Test Report: KVM_Linux_crio 21139

c4345f2baa4ca80c4898fac9368be2207cfcb3f0:2025-11-09:42265

Tests failed (23/345)

TestAddons/parallel/Ingress (492.48s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-640912 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-640912 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-640912 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [d25ea36b-0cab-4e93-a461-46fc1de68cdc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-640912 -n addons-640912
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-11-09 13:40:16.296039111 +0000 UTC m=+687.018513497
addons_test.go:252: (dbg) Run:  kubectl --context addons-640912 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-640912 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-640912/192.168.39.228
Start Time:       Sun, 09 Nov 2025 13:32:15 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.30
IPs:
  IP:  10.244.0.30
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nxkzm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-nxkzm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  8m1s                 default-scheduler  Successfully assigned default/nginx to addons-640912
  Normal   Pulling    114s (x4 over 8m)    kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     62s (x4 over 6m48s)  kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     62s (x4 over 6m48s)  kubelet            Error: ErrImagePull
  Normal   BackOff    11s (x9 over 6m47s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     11s (x9 over 6m47s)  kubelet            Error: ImagePullBackOff
addons_test.go:252: (dbg) Run:  kubectl --context addons-640912 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-640912 logs nginx -n default: exit status 1 (87.92226ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-640912 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
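Note on the failure mode: this run died on Docker Hub's unauthenticated pull rate limit ("toomanyrequests" in the pod events), not on a cluster or addon regression. One possible mitigation, sketched here with placeholder credentials and an illustrative secret name (dockerhub-creds), is to let the kubelet pull as an authenticated user:

    kubectl --context addons-640912 create secret docker-registry dockerhub-creds \
        --docker-server=https://index.docker.io/v1/ \
        --docker-username=<user> --docker-password=<access-token>
    kubectl --context addons-640912 patch serviceaccount default \
        -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'

Alternatively, starting the profile with a registry mirror (minikube start --registry-mirror=<mirror-url>) would route docker.io pulls away from the anonymous quota.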
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-640912 -n addons-640912
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-640912 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-640912 logs -n 25: (1.494372116s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ delete  │ -p download-only-045678                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-045678 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ delete  │ -p download-only-969818                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-969818 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ delete  │ -p download-only-045678                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-045678 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ start   │ --download-only -p binary-mirror-045777 --alsologtostderr --binary-mirror http://127.0.0.1:41935 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-045777 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ delete  │ -p binary-mirror-045777                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-045777 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ addons  │ enable dashboard -p addons-640912                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-640912                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ start   │ -p addons-640912 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ addons-640912 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ addons-640912 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ enable headlamp -p addons-640912 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ addons-640912 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ addons-640912 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ addons-640912 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-640912 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-640912 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ ip      │ addons-640912 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-640912 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-640912 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-640912                                                                                                                                                                                                                                                                                                                                                                                         │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-640912 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-640912 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:34 UTC │ 09 Nov 25 13:35 UTC │
	│ addons  │ addons-640912 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:38 UTC │ 09 Nov 25 13:38 UTC │
	│ addons  │ addons-640912 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:38 UTC │ 09 Nov 25 13:38 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:29:01
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:29:01.529521  554049 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:29:01.529783  554049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:29:01.529806  554049 out.go:374] Setting ErrFile to fd 2...
	I1109 13:29:01.529811  554049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:29:01.530042  554049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 13:29:01.530619  554049 out.go:368] Setting JSON to false
	I1109 13:29:01.531597  554049 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":69091,"bootTime":1762625851,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 13:29:01.531713  554049 start.go:143] virtualization: kvm guest
	I1109 13:29:01.533875  554049 out.go:179] * [addons-640912] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 13:29:01.535675  554049 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:29:01.535668  554049 notify.go:221] Checking for updates...
	I1109 13:29:01.538124  554049 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:29:01.539382  554049 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 13:29:01.540720  554049 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 13:29:01.542038  554049 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 13:29:01.543437  554049 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:29:01.545291  554049 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:29:01.580555  554049 out.go:179] * Using the kvm2 driver based on user configuration
	I1109 13:29:01.581955  554049 start.go:309] selected driver: kvm2
	I1109 13:29:01.581991  554049 start.go:930] validating driver "kvm2" against <nil>
	I1109 13:29:01.582008  554049 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:29:01.582854  554049 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 13:29:01.583161  554049 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:29:01.583199  554049 cni.go:84] Creating CNI manager for ""
	I1109 13:29:01.583249  554049 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 13:29:01.583262  554049 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1109 13:29:01.583305  554049 start.go:353] cluster config:
	{Name:addons-640912 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-640912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:29:01.583400  554049 iso.go:125] acquiring lock: {Name:mk8f0547b4600c0b2d1e831f024251df19b55199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:29:01.585006  554049 out.go:179] * Starting "addons-640912" primary control-plane node in "addons-640912" cluster
	I1109 13:29:01.586291  554049 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:29:01.586344  554049 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 13:29:01.586355  554049 cache.go:65] Caching tarball of preloaded images
	I1109 13:29:01.586504  554049 preload.go:238] Found /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 13:29:01.586520  554049 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 13:29:01.586929  554049 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/config.json ...
	I1109 13:29:01.586963  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/config.json: {Name:mk64beb99f02d72e356fa001c0aedbf8dde60a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:01.587175  554049 start.go:360] acquireMachinesLock for addons-640912: {Name:mkb2a0b4add9a99b18ce9ab72b74eb5b4fda0e0a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1109 13:29:01.587250  554049 start.go:364] duration metric: took 54.118µs to acquireMachinesLock for "addons-640912"
	I1109 13:29:01.587279  554049 start.go:93] Provisioning new machine with config: &{Name:addons-640912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-640912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:29:01.587339  554049 start.go:125] createHost starting for "" (driver="kvm2")
	I1109 13:29:01.588964  554049 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1109 13:29:01.589196  554049 start.go:159] libmachine.API.Create for "addons-640912" (driver="kvm2")
	I1109 13:29:01.589238  554049 client.go:173] LocalClient.Create starting
	I1109 13:29:01.589385  554049 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem
	I1109 13:29:01.866031  554049 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem
	I1109 13:29:02.376066  554049 main.go:143] libmachine: creating domain...
	I1109 13:29:02.376091  554049 main.go:143] libmachine: creating network...
	I1109 13:29:02.377887  554049 main.go:143] libmachine: found existing default network
	I1109 13:29:02.378145  554049 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1109 13:29:02.378765  554049 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f10d50}
	I1109 13:29:02.378922  554049 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-640912</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
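	# For reference, the same network could be brought up by hand with stock
	# virsh (a sketch; assumes the XML above is saved as mk-addons-640912.xml
	# and that virsh targets qemu:///system, as the log configures):
	#   virsh net-define mk-addons-640912.xml
	#   virsh net-start mk-addons-640912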
	I1109 13:29:02.385552  554049 main.go:143] libmachine: creating private network mk-addons-640912 192.168.39.0/24...
	I1109 13:29:02.480263  554049 main.go:143] libmachine: private network mk-addons-640912 192.168.39.0/24 created
	I1109 13:29:02.480592  554049 main.go:143] libmachine: <network>
	  <name>mk-addons-640912</name>
	  <uuid>5093d52e-d83e-4496-8f74-950632b55811</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:c3:49:16'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1109 13:29:02.480645  554049 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912 ...
	I1109 13:29:02.480684  554049 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21139-549598/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso
	I1109 13:29:02.480700  554049 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 13:29:02.480786  554049 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21139-549598/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21139-549598/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso...
	I1109 13:29:02.790875  554049 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa...
	I1109 13:29:03.048683  554049 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/addons-640912.rawdisk...
	I1109 13:29:03.048760  554049 main.go:143] libmachine: Writing magic tar header
	I1109 13:29:03.048789  554049 main.go:143] libmachine: Writing SSH key tar header
	I1109 13:29:03.048939  554049 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912 ...
	I1109 13:29:03.049043  554049 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912
	I1109 13:29:03.049096  554049 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912 (perms=drwx------)
	I1109 13:29:03.049130  554049 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598/.minikube/machines
	I1109 13:29:03.049146  554049 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598/.minikube/machines (perms=drwxr-xr-x)
	I1109 13:29:03.049170  554049 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 13:29:03.049190  554049 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598/.minikube (perms=drwxr-xr-x)
	I1109 13:29:03.049210  554049 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598
	I1109 13:29:03.049226  554049 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598 (perms=drwxrwxr-x)
	I1109 13:29:03.049244  554049 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1109 13:29:03.049267  554049 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1109 13:29:03.049283  554049 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1109 13:29:03.049298  554049 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1109 13:29:03.049317  554049 main.go:143] libmachine: checking permissions on dir: /home
	I1109 13:29:03.049332  554049 main.go:143] libmachine: skipping /home - not owner
	I1109 13:29:03.049347  554049 main.go:143] libmachine: defining domain...
	I1109 13:29:03.051070  554049 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-640912</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/addons-640912.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-640912'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
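	# Equivalent manual flow for the domain XML above (a sketch; assumes it is
	# saved as addons-640912.xml):
	#   virsh define addons-640912.xml
	#   virsh start addons-640912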
	I1109 13:29:03.059933  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:f8:20:b0 in network default
	I1109 13:29:03.060875  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:03.060908  554049 main.go:143] libmachine: starting domain...
	I1109 13:29:03.060914  554049 main.go:143] libmachine: ensuring networks are active...
	I1109 13:29:03.062198  554049 main.go:143] libmachine: Ensuring network default is active
	I1109 13:29:03.062950  554049 main.go:143] libmachine: Ensuring network mk-addons-640912 is active
	I1109 13:29:03.064049  554049 main.go:143] libmachine: getting domain XML...
	I1109 13:29:03.066087  554049 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-640912</name>
	  <uuid>50c2653c-dfbf-41f9-bef0-624b1a679070</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/addons-640912.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:3b:97:c4'/>
	      <source network='mk-addons-640912'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:f8:20:b0'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1109 13:29:04.518458  554049 main.go:143] libmachine: waiting for domain to start...
	I1109 13:29:04.520285  554049 main.go:143] libmachine: domain is now running
	I1109 13:29:04.520317  554049 main.go:143] libmachine: waiting for IP...
	I1109 13:29:04.521463  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:04.522572  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:04.522598  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:04.523028  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:04.523130  554049 retry.go:31] will retry after 248.555943ms: waiting for domain to come up
	I1109 13:29:04.773776  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:04.774727  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:04.774751  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:04.775169  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:04.775219  554049 retry.go:31] will retry after 253.374239ms: waiting for domain to come up
	I1109 13:29:05.030329  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:05.031648  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:05.031676  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:05.032301  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:05.032357  554049 retry.go:31] will retry after 460.991203ms: waiting for domain to come up
	I1109 13:29:05.495209  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:05.495935  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:05.495953  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:05.496394  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:05.496445  554049 retry.go:31] will retry after 488.671936ms: waiting for domain to come up
	I1109 13:29:05.987310  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:05.988315  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:05.988337  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:05.988678  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:05.988724  554049 retry.go:31] will retry after 734.270823ms: waiting for domain to come up
	I1109 13:29:06.724517  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:06.725451  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:06.725483  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:06.726091  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:06.726145  554049 retry.go:31] will retry after 813.958486ms: waiting for domain to come up
	I1109 13:29:07.541351  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:07.542188  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:07.542215  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:07.542584  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:07.542638  554049 retry.go:31] will retry after 773.028537ms: waiting for domain to come up
	I1109 13:29:08.317882  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:08.318758  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:08.318779  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:08.319182  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:08.319228  554049 retry.go:31] will retry after 902.625899ms: waiting for domain to come up
	I1109 13:29:09.223517  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:09.224270  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:09.224291  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:09.224645  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:09.224698  554049 retry.go:31] will retry after 1.447427193s: waiting for domain to come up
	I1109 13:29:10.674526  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:10.675369  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:10.675411  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:10.675832  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:10.675890  554049 retry.go:31] will retry after 1.413133453s: waiting for domain to come up
	I1109 13:29:12.090825  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:12.091679  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:12.091701  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:12.092074  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:12.092132  554049 retry.go:31] will retry after 1.812634142s: waiting for domain to come up
	I1109 13:29:13.907484  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:13.908470  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:13.908492  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:13.908953  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:13.909039  554049 retry.go:31] will retry after 3.291540475s: waiting for domain to come up
	I1109 13:29:17.202151  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:17.202984  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:17.203006  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:17.203397  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:17.203453  554049 retry.go:31] will retry after 4.480228837s: waiting for domain to come up
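	# The polling above can be reproduced by hand; these virsh commands mirror
	# the source=lease / source=arp lookups attempted in the log:
	#   virsh net-dhcp-leases mk-addons-640912
	#   virsh domifaddr addons-640912 --source arp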
	I1109 13:29:21.685736  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:21.686518  554049 main.go:143] libmachine: domain addons-640912 has current primary IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:21.686537  554049 main.go:143] libmachine: found domain IP: 192.168.39.228
	I1109 13:29:21.686546  554049 main.go:143] libmachine: reserving static IP address...
	I1109 13:29:21.687020  554049 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-640912", mac: "52:54:00:3b:97:c4", ip: "192.168.39.228"} in network mk-addons-640912
	I1109 13:29:21.917975  554049 main.go:143] libmachine: reserved static IP address 192.168.39.228 for domain addons-640912
	I1109 13:29:21.918007  554049 main.go:143] libmachine: waiting for SSH...
	I1109 13:29:21.918016  554049 main.go:143] libmachine: Getting to WaitForSSH function...
	I1109 13:29:21.923685  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:21.924701  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:21.924754  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:21.925088  554049 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:21.925387  554049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1109 13:29:21.925408  554049 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1109 13:29:22.046606  554049 main.go:143] libmachine: SSH cmd err, output: <nil>: 
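	# Equivalent manual readiness probe (a sketch; the SSH user "docker" and the
	# key path are taken from the native client struct and key creation above):
	#   ssh -i /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa \
	#       docker@192.168.39.228 'exit 0'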
	I1109 13:29:22.047386  554049 main.go:143] libmachine: domain creation complete
	I1109 13:29:22.050123  554049 machine.go:94] provisionDockerMachine start ...
	I1109 13:29:22.054988  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.055715  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.055765  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.056311  554049 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:22.056903  554049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1109 13:29:22.056974  554049 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 13:29:22.180617  554049 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1109 13:29:22.180660  554049 buildroot.go:166] provisioning hostname "addons-640912"
	I1109 13:29:22.186275  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.187117  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.187172  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.187501  554049 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:22.187787  554049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1109 13:29:22.187941  554049 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-640912 && echo "addons-640912" | sudo tee /etc/hostname
	I1109 13:29:22.341110  554049 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-640912
	
	I1109 13:29:22.345909  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.346909  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.346961  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.347366  554049 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:22.347633  554049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1109 13:29:22.347656  554049 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-640912' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-640912/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-640912' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 13:29:22.484440  554049 main.go:143] libmachine: SSH cmd err, output: <nil>: 
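The shell fragment above is an idempotent /etc/hosts fixup: if the hostname is already present, do nothing; if a 127.0.1.1 entry exists, rewrite it; otherwise append a new one. A minimal Go sketch of the same logic (a hypothetical helper, not minikube's actual code):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // patchHosts returns hosts-file content that maps 127.0.1.1 to name,
    // rewriting an existing 127.0.1.1 line or appending a new one.
    func patchHosts(hosts, name string) string {
        if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
            return hosts // hostname already present; nothing to do
        }
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if re.MatchString(hosts) {
            return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        fmt.Print(patchHosts("127.0.0.1 localhost\n", "addons-640912"))
    }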
	I1109 13:29:22.484470  554049 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21139-549598/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-549598/.minikube}
	I1109 13:29:22.484528  554049 buildroot.go:174] setting up certificates
	I1109 13:29:22.484547  554049 provision.go:84] configureAuth start
	I1109 13:29:22.488028  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.488482  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.488510  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.491209  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.491676  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.491713  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.491896  554049 provision.go:143] copyHostCerts
	I1109 13:29:22.492005  554049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/key.pem (1679 bytes)
	I1109 13:29:22.492184  554049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/ca.pem (1082 bytes)
	I1109 13:29:22.492340  554049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/cert.pem (1123 bytes)
	I1109 13:29:22.492422  554049 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem org=jenkins.addons-640912 san=[127.0.0.1 192.168.39.228 addons-640912 localhost minikube]
	I1109 13:29:22.673233  554049 provision.go:177] copyRemoteCerts
	I1109 13:29:22.673315  554049 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 13:29:22.676789  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.677351  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.677382  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.677656  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:22.784762  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 13:29:22.825830  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 13:29:22.864504  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 13:29:22.902700  554049 provision.go:87] duration metric: took 418.129808ms to configureAuth
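The configureAuth step above issues a server certificate for the SAN set [127.0.0.1 192.168.39.228 addons-640912 localhost minikube] and ships it to /etc/docker on the guest. A minimal Go sketch of issuing such a cert with crypto/x509; it is self-signed here for brevity, whereas minikube signs with its CA key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-640912"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration in the profile config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.228")},
            DNSNames:     []string{"addons-640912", "localhost", "minikube"},
        }
        // Self-signed for brevity; minikube uses its CA cert/key as the parent instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }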
	I1109 13:29:22.902746  554049 buildroot.go:189] setting minikube options for container-runtime
	I1109 13:29:22.903033  554049 config.go:182] Loaded profile config "addons-640912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:29:22.907271  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.907853  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.907882  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.908152  554049 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:22.908394  554049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1109 13:29:22.908415  554049 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 13:29:23.187121  554049 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 13:29:23.187169  554049 machine.go:97] duration metric: took 1.136996743s to provisionDockerMachine
	I1109 13:29:23.187186  554049 client.go:176] duration metric: took 21.597936799s to LocalClient.Create
	I1109 13:29:23.187206  554049 start.go:167] duration metric: took 21.598018749s to libmachine.API.Create "addons-640912"
	I1109 13:29:23.187218  554049 start.go:293] postStartSetup for "addons-640912" (driver="kvm2")
	I1109 13:29:23.187233  554049 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 13:29:23.187304  554049 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 13:29:23.190951  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.191437  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:23.191471  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.191673  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:23.284957  554049 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 13:29:23.290908  554049 info.go:137] Remote host: Buildroot 2025.02
	I1109 13:29:23.290944  554049 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-549598/.minikube/addons for local assets ...
	I1109 13:29:23.291033  554049 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-549598/.minikube/files for local assets ...
	I1109 13:29:23.291059  554049 start.go:296] duration metric: took 103.83477ms for postStartSetup
	I1109 13:29:23.294496  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.294979  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:23.295007  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.295298  554049 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/config.json ...
	I1109 13:29:23.295545  554049 start.go:128] duration metric: took 21.708191701s to createHost
	I1109 13:29:23.298433  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.298897  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:23.298929  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.299160  554049 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:23.299426  554049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1109 13:29:23.299443  554049 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1109 13:29:23.418842  554049 main.go:143] libmachine: SSH cmd err, output: <nil>: 1762694963.374684002
	
	I1109 13:29:23.418873  554049 fix.go:216] guest clock: 1762694963.374684002
	I1109 13:29:23.418882  554049 fix.go:229] Guest: 2025-11-09 13:29:23.374684002 +0000 UTC Remote: 2025-11-09 13:29:23.295558762 +0000 UTC m=+21.824848523 (delta=79.12524ms)
	I1109 13:29:23.418901  554049 fix.go:200] guest clock delta is within tolerance: 79.12524ms
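The clock check above parses the output of `date +%s.%N` on the guest and compares it with the host-side timestamp: 13:29:23.374684002 minus 13:29:23.295558762 gives the logged 79.12524ms delta. A small Go sketch of that computation, assuming a 2s tolerance purely for illustration:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts `date +%s.%N` output into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        nsec := int64(0)
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, _ := parseGuestClock("1762694963.374684002")
        remote := time.Date(2025, 11, 9, 13, 29, 23, 295558762, time.UTC)
        delta := guest.Sub(remote)                 // 79.12524ms, as logged above
        fmt.Println(delta, delta < 2*time.Second)  // assumed tolerance for the sketch
    }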
	I1109 13:29:23.418908  554049 start.go:83] releasing machines lock for "addons-640912", held for 21.831643055s
	I1109 13:29:23.422763  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.423397  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:23.423435  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.424204  554049 ssh_runner.go:195] Run: cat /version.json
	I1109 13:29:23.424308  554049 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 13:29:23.428595  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.428753  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.429413  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:23.429427  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:23.429458  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.429456  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.429725  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:23.430070  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:23.517677  554049 ssh_runner.go:195] Run: systemctl --version
	I1109 13:29:23.548521  554049 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 13:29:23.725456  554049 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 13:29:23.734446  554049 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 13:29:23.734539  554049 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 13:29:23.762212  554049 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 13:29:23.762264  554049 start.go:496] detecting cgroup driver to use...
	I1109 13:29:23.762376  554049 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 13:29:23.789312  554049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 13:29:23.811825  554049 docker.go:218] disabling cri-docker service (if available) ...
	I1109 13:29:23.811901  554049 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 13:29:23.835122  554049 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 13:29:23.857937  554049 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 13:29:24.036028  554049 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 13:29:24.258493  554049 docker.go:234] disabling docker service ...
	I1109 13:29:24.258579  554049 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 13:29:24.279139  554049 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 13:29:24.297344  554049 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 13:29:24.474651  554049 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 13:29:24.636841  554049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 13:29:24.655525  554049 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 13:29:24.685964  554049 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 13:29:24.686029  554049 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:24.703327  554049 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 13:29:24.703429  554049 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:24.723542  554049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:24.741826  554049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:24.759313  554049 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 13:29:24.777075  554049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:24.793754  554049 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:24.819676  554049 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
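Read together, the sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on roughly the drop-in below. This is a reconstruction from the commands, not a file captured from the VM, and the TOML section headers are assumed:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]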
	I1109 13:29:24.834925  554049 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 13:29:24.851569  554049 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1109 13:29:24.851655  554049 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1109 13:29:24.880394  554049 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 13:29:24.896630  554049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:29:25.060770  554049 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 13:29:25.189371  554049 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 13:29:25.189516  554049 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 13:29:25.198545  554049 start.go:564] Will wait 60s for crictl version
	I1109 13:29:25.198660  554049 ssh_runner.go:195] Run: which crictl
	I1109 13:29:25.205416  554049 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1109 13:29:25.257021  554049 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1109 13:29:25.257128  554049 ssh_runner.go:195] Run: crio --version
	I1109 13:29:25.294847  554049 ssh_runner.go:195] Run: crio --version
	I1109 13:29:25.335910  554049 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1109 13:29:25.340895  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:25.341471  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:25.341501  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:25.341823  554049 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1109 13:29:25.348236  554049 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:29:25.368735  554049 kubeadm.go:884] updating cluster {Name:addons-640912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-640912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 13:29:25.368898  554049 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:29:25.368946  554049 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:29:25.416200  554049 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1109 13:29:25.416292  554049 ssh_runner.go:195] Run: which lz4
	I1109 13:29:25.422425  554049 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1109 13:29:25.429188  554049 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1109 13:29:25.429238  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1109 13:29:27.484253  554049 crio.go:462] duration metric: took 2.061869484s to copy over tarball
	I1109 13:29:27.484374  554049 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1109 13:29:29.665537  554049 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.181121034s)
	I1109 13:29:29.665572  554049 crio.go:469] duration metric: took 2.181275636s to extract the tarball
	I1109 13:29:29.665583  554049 ssh_runner.go:146] rm: /preloaded.tar.lz4
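For scale: the preload is 409,477,533 bytes ≈ 390.5 MiB, so 390.5 MiB / 2.06 s ≈ 190 MiB/s for the scp copy and 390.5 MiB / 2.18 s ≈ 179 MiB/s for the lz4 extraction; the whole preload path costs roughly four seconds here.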
	I1109 13:29:29.711172  554049 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:29:29.767518  554049 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:29:29.767551  554049 cache_images.go:86] Images are preloaded, skipping loading
	I1109 13:29:29.767560  554049 kubeadm.go:935] updating node { 192.168.39.228 8443 v1.34.1 crio true true} ...
	I1109 13:29:29.767658  554049 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-640912 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-640912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 13:29:29.767738  554049 ssh_runner.go:195] Run: crio config
	I1109 13:29:29.827752  554049 cni.go:84] Creating CNI manager for ""
	I1109 13:29:29.827802  554049 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 13:29:29.827828  554049 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 13:29:29.827856  554049 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.228 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-640912 NodeName:addons-640912 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 13:29:29.828036  554049 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-640912"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.228"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.228"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 13:29:29.828128  554049 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 13:29:29.842993  554049 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 13:29:29.843074  554049 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 13:29:29.857721  554049 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1109 13:29:29.885131  554049 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 13:29:29.910962  554049 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
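One way to sanity-check the rendered multi-document config before handing it to kubeadm is to confirm the pod CIDR agrees across documents (ClusterConfiguration's networking.podSubnet vs. KubeProxyConfiguration's clusterCIDR, both 10.244.0.0/16 above). A minimal Go sketch using the third-party gopkg.in/yaml.v3 package; this checker is hypothetical and not part of minikube:

    package main

    import (
        "bytes"
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        var podSubnet, clusterCIDR string
        dec := yaml.NewDecoder(bytes.NewReader(raw))
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                break // io.EOF once every document is consumed
            }
            switch doc["kind"] {
            case "ClusterConfiguration":
                if n, ok := doc["networking"].(map[string]interface{}); ok {
                    podSubnet, _ = n["podSubnet"].(string)
                }
            case "KubeProxyConfiguration":
                clusterCIDR, _ = doc["clusterCIDR"].(string)
            }
        }
        fmt.Printf("podSubnet=%s clusterCIDR=%s match=%v\n",
            podSubnet, clusterCIDR, podSubnet == clusterCIDR)
    }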
	I1109 13:29:29.937168  554049 ssh_runner.go:195] Run: grep 192.168.39.228	control-plane.minikube.internal$ /etc/hosts
	I1109 13:29:29.942897  554049 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:29:29.961825  554049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:29:30.136776  554049 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:29:30.177152  554049 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912 for IP: 192.168.39.228
	I1109 13:29:30.177200  554049 certs.go:195] generating shared ca certs ...
	I1109 13:29:30.177243  554049 certs.go:227] acquiring lock for ca certs: {Name:mkc766226c1ec8ac0cc61519ae61374bb0aa3b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:30.177612  554049 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key
	I1109 13:29:30.526469  554049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt ...
	I1109 13:29:30.526517  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt: {Name:mk1e1ec152f9e7533279dd061df1b855d91797d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:30.526783  554049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key ...
	I1109 13:29:30.526817  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key: {Name:mkb474930e06e0f2d9550b3e47f06fa0412d8c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:30.526988  554049 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key
	I1109 13:29:31.103187  554049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.crt ...
	I1109 13:29:31.103229  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.crt: {Name:mkeee8761eaad8a6feacfb3f1772dbd1f57cdfd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.103462  554049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key ...
	I1109 13:29:31.103479  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key: {Name:mke1379b4418067ce1a11d365cf664bfd6b63fb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.103597  554049 certs.go:257] generating profile certs ...
	I1109 13:29:31.103681  554049 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.key
	I1109 13:29:31.103717  554049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt with IP's: []
	I1109 13:29:31.393629  554049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt ...
	I1109 13:29:31.393668  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: {Name:mk1aadbe63d88684ddb1deb4c7d25f36cf84bd13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.393894  554049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.key ...
	I1109 13:29:31.393913  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.key: {Name:mkb1582a5c9747ad241e1432ddae43398ee47c0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.393997  554049 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.key.1735871a
	I1109 13:29:31.394017  554049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.crt.1735871a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.228]
	I1109 13:29:31.559744  554049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.crt.1735871a ...
	I1109 13:29:31.559782  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.crt.1735871a: {Name:mk1cf5afc9dcb9c29b6fdbc1d8dbda4b8a0ad1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.560029  554049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.key.1735871a ...
	I1109 13:29:31.560045  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.key.1735871a: {Name:mk6eb23d524b1cf83b979febf555dc8a2670dd3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.560132  554049 certs.go:382] copying /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.crt.1735871a -> /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.crt
	I1109 13:29:31.560210  554049 certs.go:386] copying /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.key.1735871a -> /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.key
	I1109 13:29:31.560259  554049 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.key
	I1109 13:29:31.560279  554049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.crt with IP's: []
	I1109 13:29:31.942231  554049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.crt ...
	I1109 13:29:31.942265  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.crt: {Name:mk94f935952bddbb6d98595db3e977b6c297b768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.942468  554049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.key ...
	I1109 13:29:31.942486  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.key: {Name:mk5d96b637c774a9f2904169fcbe11646a6b30aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.942692  554049 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 13:29:31.942732  554049 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem (1082 bytes)
	I1109 13:29:31.942757  554049 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem (1123 bytes)
	I1109 13:29:31.942779  554049 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem (1679 bytes)
	I1109 13:29:31.943571  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 13:29:31.984717  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 13:29:32.026837  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 13:29:32.064723  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 13:29:32.101911  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1109 13:29:32.137993  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 13:29:32.174693  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 13:29:32.211419  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 13:29:32.251159  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 13:29:32.289056  554049 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 13:29:32.315536  554049 ssh_runner.go:195] Run: openssl version
	I1109 13:29:32.323702  554049 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 13:29:32.341220  554049 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:29:32.348539  554049 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:29:32.348611  554049 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:29:32.357914  554049 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 13:29:32.374131  554049 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 13:29:32.380778  554049 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 13:29:32.380859  554049 kubeadm.go:401] StartCluster: {Name:addons-640912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-640912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:29:32.380939  554049 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:29:32.381031  554049 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:29:32.437303  554049 cri.go:89] found id: ""
	I1109 13:29:32.437443  554049 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 13:29:32.455469  554049 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 13:29:32.474586  554049 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 13:29:32.490883  554049 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 13:29:32.490932  554049 kubeadm.go:158] found existing configuration files:
	
	I1109 13:29:32.490984  554049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 13:29:32.507029  554049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 13:29:32.507106  554049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 13:29:32.526992  554049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 13:29:32.542026  554049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 13:29:32.542094  554049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 13:29:32.557462  554049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 13:29:32.571729  554049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 13:29:32.571847  554049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 13:29:32.586445  554049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 13:29:32.600460  554049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 13:29:32.600546  554049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
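The four grep-then-rm exchanges above implement stale-config cleanup: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is removed before kubeadm init runs. The same logic as a Go sketch (hypothetical, mirroring the log, not minikube's actual code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, conf := range confs {
            data, err := os.ReadFile(conf)
            // Missing file or wrong endpoint: either way the config is stale.
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(conf) // ignore the error, as `rm -f` does
                fmt.Println("removed stale config:", conf)
            }
        }
    }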
	I1109 13:29:32.615463  554049 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1109 13:29:32.797507  554049 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 13:29:45.565254  554049 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 13:29:45.565352  554049 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 13:29:45.565451  554049 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 13:29:45.565580  554049 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 13:29:45.565676  554049 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 13:29:45.565749  554049 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 13:29:45.567749  554049 out.go:252]   - Generating certificates and keys ...
	I1109 13:29:45.567923  554049 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 13:29:45.568028  554049 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 13:29:45.568143  554049 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 13:29:45.568242  554049 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 13:29:45.568335  554049 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 13:29:45.568419  554049 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 13:29:45.568504  554049 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 13:29:45.568659  554049 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-640912 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I1109 13:29:45.568749  554049 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 13:29:45.568968  554049 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-640912 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I1109 13:29:45.569082  554049 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 13:29:45.569231  554049 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 13:29:45.569297  554049 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 13:29:45.569348  554049 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 13:29:45.569405  554049 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 13:29:45.569456  554049 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 13:29:45.569506  554049 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 13:29:45.569621  554049 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 13:29:45.569729  554049 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 13:29:45.569897  554049 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 13:29:45.570022  554049 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 13:29:45.571849  554049 out.go:252]   - Booting up control plane ...
	I1109 13:29:45.572019  554049 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 13:29:45.572139  554049 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 13:29:45.572306  554049 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 13:29:45.572529  554049 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 13:29:45.572725  554049 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 13:29:45.572929  554049 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 13:29:45.573081  554049 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 13:29:45.573164  554049 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 13:29:45.573346  554049 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 13:29:45.573511  554049 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 13:29:45.573612  554049 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002203832s
	I1109 13:29:45.573738  554049 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 13:29:45.573866  554049 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.228:8443/livez
	I1109 13:29:45.574035  554049 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 13:29:45.574163  554049 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 13:29:45.574287  554049 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.851777111s
	I1109 13:29:45.574388  554049 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.338951376s
	I1109 13:29:45.574496  554049 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.505486059s
	I1109 13:29:45.574629  554049 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 13:29:45.574811  554049 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 13:29:45.574907  554049 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 13:29:45.575162  554049 kubeadm.go:319] [mark-control-plane] Marking the node addons-640912 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 13:29:45.575281  554049 kubeadm.go:319] [bootstrap-token] Using token: law7ws.rcnk7pdq4fp4bzd0
	I1109 13:29:45.577265  554049 out.go:252]   - Configuring RBAC rules ...
	I1109 13:29:45.577435  554049 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 13:29:45.577562  554049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 13:29:45.577700  554049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 13:29:45.577922  554049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 13:29:45.578078  554049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 13:29:45.578193  554049 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 13:29:45.578331  554049 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 13:29:45.578397  554049 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 13:29:45.578469  554049 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 13:29:45.578482  554049 kubeadm.go:319] 
	I1109 13:29:45.578574  554049 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 13:29:45.578586  554049 kubeadm.go:319] 
	I1109 13:29:45.578689  554049 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 13:29:45.578704  554049 kubeadm.go:319] 
	I1109 13:29:45.578741  554049 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 13:29:45.578846  554049 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 13:29:45.578924  554049 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 13:29:45.578942  554049 kubeadm.go:319] 
	I1109 13:29:45.579023  554049 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 13:29:45.579033  554049 kubeadm.go:319] 
	I1109 13:29:45.579099  554049 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 13:29:45.579142  554049 kubeadm.go:319] 
	I1109 13:29:45.579228  554049 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 13:29:45.579340  554049 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 13:29:45.579492  554049 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 13:29:45.579520  554049 kubeadm.go:319] 
	I1109 13:29:45.579616  554049 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 13:29:45.579730  554049 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 13:29:45.579758  554049 kubeadm.go:319] 
	I1109 13:29:45.579905  554049 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token law7ws.rcnk7pdq4fp4bzd0 \
	I1109 13:29:45.580053  554049 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8c9d3f71532339ff1f74fe1baae16b7e7ca1ed75ea1b3aa4741816f874e79d71 \
	I1109 13:29:45.580099  554049 kubeadm.go:319] 	--control-plane 
	I1109 13:29:45.580109  554049 kubeadm.go:319] 
	I1109 13:29:45.580227  554049 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 13:29:45.580245  554049 kubeadm.go:319] 
	I1109 13:29:45.580332  554049 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token law7ws.rcnk7pdq4fp4bzd0 \
	I1109 13:29:45.580500  554049 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8c9d3f71532339ff1f74fe1baae16b7e7ca1ed75ea1b3aa4741816f874e79d71 
	I1109 13:29:45.580520  554049 cni.go:84] Creating CNI manager for ""
	I1109 13:29:45.580531  554049 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 13:29:45.582621  554049 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1109 13:29:45.584213  554049 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1109 13:29:45.605204  554049 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
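
Note: the log records only the size of the copied conflist (496 bytes), not its contents. A representative bridge CNI config of the kind minikube writes to /etc/cni/net.d is sketched below; the exact payload, subnet, and plugin list are assumptions, not taken from this run:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
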
	I1109 13:29:45.642398  554049 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 13:29:45.642500  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:45.642500  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-640912 minikube.k8s.io/updated_at=2025_11_09T13_29_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=addons-640912 minikube.k8s.io/primary=true
	I1109 13:29:45.724991  554049 ops.go:34] apiserver oom_adj: -16
	I1109 13:29:45.860983  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:46.361977  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:46.862176  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:47.362111  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:47.861219  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:48.361843  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:48.861251  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:49.362130  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:49.861782  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:50.361731  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:50.861735  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:50.977332  554049 kubeadm.go:1114] duration metric: took 5.33492595s to wait for elevateKubeSystemPrivileges
	I1109 13:29:50.977378  554049 kubeadm.go:403] duration metric: took 18.596524599s to StartCluster
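
Note: the half-second cadence of the `kubectl get sa default` runs above is minikube waiting for the "default" ServiceAccount to exist before it considers kube-system privileges elevated. A hedged reconstruction of that wait as a shell loop (the real Go code also applies a timeout, omitted here):

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # matches the ~500ms spacing of the polls in the log
	done
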
	I1109 13:29:50.977400  554049 settings.go:142] acquiring lock: {Name:mkb59fcf785d78efbba1217c69544ee37b77198f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:50.977564  554049 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 13:29:50.978027  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/kubeconfig: {Name:mka7e7e8d5d1d87facf220110c90862a74355591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:50.978280  554049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 13:29:50.978317  554049 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:29:50.978398  554049 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1109 13:29:50.978554  554049 addons.go:70] Setting yakd=true in profile "addons-640912"
	I1109 13:29:50.978574  554049 config.go:182] Loaded profile config "addons-640912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:29:50.978592  554049 addons.go:70] Setting metrics-server=true in profile "addons-640912"
	I1109 13:29:50.978604  554049 addons.go:239] Setting addon metrics-server=true in "addons-640912"
	I1109 13:29:50.978583  554049 addons.go:70] Setting inspektor-gadget=true in profile "addons-640912"
	I1109 13:29:50.978629  554049 addons.go:70] Setting ingress=true in profile "addons-640912"
	I1109 13:29:50.978638  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978640  554049 addons.go:70] Setting ingress-dns=true in profile "addons-640912"
	I1109 13:29:50.978642  554049 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-640912"
	I1109 13:29:50.978649  554049 addons.go:239] Setting addon ingress=true in "addons-640912"
	I1109 13:29:50.978587  554049 addons.go:70] Setting default-storageclass=true in profile "addons-640912"
	I1109 13:29:50.978657  554049 addons.go:239] Setting addon ingress-dns=true in "addons-640912"
	I1109 13:29:50.978690  554049 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-640912"
	I1109 13:29:50.978715  554049 addons.go:70] Setting storage-provisioner=true in profile "addons-640912"
	I1109 13:29:50.978728  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978738  554049 addons.go:239] Setting addon storage-provisioner=true in "addons-640912"
	I1109 13:29:50.978757  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978772  554049 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-640912"
	I1109 13:29:50.978822  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978632  554049 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-640912"
	I1109 13:29:50.980356  554049 addons.go:70] Setting volumesnapshots=true in profile "addons-640912"
	I1109 13:29:50.980394  554049 addons.go:239] Setting addon volumesnapshots=true in "addons-640912"
	I1109 13:29:50.980427  554049 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-640912"
	I1109 13:29:50.980439  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.980467  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.980505  554049 addons.go:70] Setting registry=true in profile "addons-640912"
	I1109 13:29:50.980528  554049 addons.go:239] Setting addon registry=true in "addons-640912"
	I1109 13:29:50.980562  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978694  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.980786  554049 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-640912"
	I1109 13:29:50.980833  554049 addons.go:70] Setting registry-creds=true in profile "addons-640912"
	I1109 13:29:50.980872  554049 addons.go:239] Setting addon registry-creds=true in "addons-640912"
	I1109 13:29:50.980911  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.980979  554049 addons.go:70] Setting gcp-auth=true in profile "addons-640912"
	I1109 13:29:50.981016  554049 mustload.go:66] Loading cluster: addons-640912
	I1109 13:29:50.981255  554049 config.go:182] Loaded profile config "addons-640912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:29:50.980845  554049 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-640912"
	I1109 13:29:50.981636  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978582  554049 addons.go:239] Setting addon yakd=true in "addons-640912"
	I1109 13:29:50.981926  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.982378  554049 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-640912"
	I1109 13:29:50.982484  554049 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-640912"
	I1109 13:29:50.978638  554049 addons.go:70] Setting cloud-spanner=true in profile "addons-640912"
	I1109 13:29:50.983108  554049 out.go:179] * Verifying Kubernetes components...
	I1109 13:29:50.983376  554049 addons.go:239] Setting addon cloud-spanner=true in "addons-640912"
	I1109 13:29:50.983433  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978647  554049 addons.go:239] Setting addon inspektor-gadget=true in "addons-640912"
	I1109 13:29:50.983505  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.983818  554049 addons.go:70] Setting volcano=true in profile "addons-640912"
	I1109 13:29:50.983847  554049 addons.go:239] Setting addon volcano=true in "addons-640912"
	I1109 13:29:50.983888  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.985888  554049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:29:50.988835  554049 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 13:29:50.988855  554049 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1109 13:29:50.988850  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1109 13:29:50.990206  554049 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 13:29:50.990229  554049 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1109 13:29:50.990256  554049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1109 13:29:50.990263  554049 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1109 13:29:50.990235  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 13:29:50.990626  554049 addons.go:239] Setting addon default-storageclass=true in "addons-640912"
	I1109 13:29:50.990686  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.990970  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.992322  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1109 13:29:50.992399  554049 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1109 13:29:50.992401  554049 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1109 13:29:50.992422  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1109 13:29:50.994043  554049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:29:50.994224  554049 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1109 13:29:50.994233  554049 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1109 13:29:50.994273  554049 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1109 13:29:50.994288  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1109 13:29:50.994234  554049 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1109 13:29:50.995216  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	W1109 13:29:50.994377  554049 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1109 13:29:50.994419  554049 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-640912"
	I1109 13:29:50.995510  554049 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1109 13:29:50.995521  554049 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1109 13:29:50.995518  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.995532  554049 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1109 13:29:50.995544  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1109 13:29:50.997116  554049 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1109 13:29:50.995622  554049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1109 13:29:50.996314  554049 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1109 13:29:50.996379  554049 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1109 13:29:50.996881  554049 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 13:29:50.997509  554049 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 13:29:50.997520  554049 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1109 13:29:50.997721  554049 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1109 13:29:50.997770  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1109 13:29:50.997196  554049 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1109 13:29:50.997851  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1109 13:29:50.998125  554049 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1109 13:29:50.998204  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1109 13:29:50.998240  554049 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1109 13:29:50.998255  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1109 13:29:50.998809  554049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:29:51.000211  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1109 13:29:51.000212  554049 out.go:179]   - Using image docker.io/registry:3.0.0
	I1109 13:29:51.000409  554049 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1109 13:29:51.000431  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1109 13:29:51.001876  554049 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1109 13:29:51.001963  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1109 13:29:51.002318  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.003013  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1109 13:29:51.003111  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.004326  554049 out.go:179]   - Using image docker.io/busybox:stable
	I1109 13:29:51.004405  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.004447  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.005229  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.005291  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.005684  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.005724  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1109 13:29:51.006264  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.006874  554049 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1109 13:29:51.007777  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.008155  554049 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1109 13:29:51.008180  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1109 13:29:51.008195  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1109 13:29:51.008364  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.009951  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.009992  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.010700  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.010752  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.010781  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1109 13:29:51.010862  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.011032  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.011751  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.012288  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1109 13:29:51.012399  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.012548  554049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1109 13:29:51.012598  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.012686  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.012719  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.013507  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.013623  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.013651  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.013707  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.014346  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.014367  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.014370  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.014490  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.014521  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.014605  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.015131  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.015563  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.015596  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.015899  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.016519  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.016594  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.016603  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.016624  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.016745  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.016827  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.016931  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.017041  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.017091  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.017453  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.017728  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.017778  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.017836  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.017934  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.017973  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.018186  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.019148  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.020575  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.021218  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.021220  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.021272  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.021523  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.021963  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.021994  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.022184  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	W1109 13:29:51.386846  554049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57418->192.168.39.228:22: read: connection reset by peer
	I1109 13:29:51.386892  554049 retry.go:31] will retry after 222.983762ms: ssh: handshake failed: read tcp 192.168.39.1:57418->192.168.39.228:22: read: connection reset by peer
	W1109 13:29:51.444433  554049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57434->192.168.39.228:22: read: connection reset by peer
	I1109 13:29:51.444480  554049 retry.go:31] will retry after 227.572873ms: ssh: handshake failed: read tcp 192.168.39.1:57434->192.168.39.228:22: read: connection reset by peer
	W1109 13:29:51.612303  554049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57454->192.168.39.228:22: read: connection reset by peer
	I1109 13:29:51.612342  554049 retry.go:31] will retry after 211.681358ms: ssh: handshake failed: read tcp 192.168.39.1:57454->192.168.39.228:22: read: connection reset by peer
	I1109 13:29:52.010077  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 13:29:52.235070  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1109 13:29:52.331852  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 13:29:52.352738  554049 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1109 13:29:52.352773  554049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1109 13:29:52.395388  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1109 13:29:52.440660  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1109 13:29:52.445961  554049 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1109 13:29:52.446002  554049 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1109 13:29:52.448181  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1109 13:29:52.448236  554049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1109 13:29:52.544737  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1109 13:29:52.551025  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1109 13:29:52.566441  554049 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1109 13:29:52.566471  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1109 13:29:52.632342  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1109 13:29:52.729889  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1109 13:29:53.011917  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1109 13:29:53.259319  554049 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (2.280994349s)
	I1109 13:29:53.259435  554049 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (2.273499892s)
	I1109 13:29:53.259518  554049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 13:29:53.259530  554049 ssh_runner.go:195] Run: sudo systemctl start kubelet
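
Note: the sed pipeline above rewrites the coredns ConfigMap in place, inserting a hosts block ahead of the `forward . /etc/resolv.conf` directive and a `log` directive ahead of `errors`. The resulting Corefile fragment (read directly off the sed expression) is:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}

which is what lets pods resolve host.minikube.internal to the host-side IP; the "host record injected" line at 13:30:04 below confirms the replace succeeded.
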
	I1109 13:29:53.377079  554049 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1109 13:29:53.377125  554049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1109 13:29:53.410957  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1109 13:29:53.410995  554049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1109 13:29:53.492668  554049 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1109 13:29:53.492714  554049 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1109 13:29:53.541620  554049 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1109 13:29:53.541665  554049 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1109 13:29:53.652096  554049 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1109 13:29:53.652133  554049 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1109 13:29:53.995555  554049 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1109 13:29:53.995587  554049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1109 13:29:54.033651  554049 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 13:29:54.033695  554049 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1109 13:29:54.067822  554049 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1109 13:29:54.067856  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1109 13:29:54.196207  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1109 13:29:54.196244  554049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1109 13:29:54.227433  554049 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1109 13:29:54.227464  554049 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1109 13:29:54.679076  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1109 13:29:54.679121  554049 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1109 13:29:54.696117  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 13:29:54.741459  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1109 13:29:54.881208  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1109 13:29:54.881247  554049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1109 13:29:54.915127  554049 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1109 13:29:54.915176  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1109 13:29:55.272351  554049 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:29:55.272388  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1109 13:29:55.364173  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1109 13:29:55.383308  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1109 13:29:55.383345  554049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1109 13:29:56.059938  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:29:56.248473  554049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1109 13:29:56.248504  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1109 13:29:57.014690  554049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1109 13:29:57.014726  554049 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1109 13:29:57.519712  554049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1109 13:29:57.519740  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1109 13:29:58.054597  554049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1109 13:29:58.054639  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1109 13:29:58.434364  554049 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1109 13:29:58.438873  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:58.439831  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:58.439910  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:58.440311  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:58.622773  554049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1109 13:29:58.622820  554049 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1109 13:29:59.371356  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1109 13:29:59.505293  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.495157485s)
	I1109 13:29:59.785061  554049 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1109 13:30:00.392747  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.157615738s)
	I1109 13:30:00.392753  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.060856374s)
	I1109 13:30:00.392830  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.997405175s)
	I1109 13:30:00.392922  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.952211987s)
	I1109 13:30:00.738340  554049 addons.go:239] Setting addon gcp-auth=true in "addons-640912"
	I1109 13:30:00.738422  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:30:00.741137  554049 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1109 13:30:00.745233  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:30:00.746101  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:30:00.746150  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:30:00.746504  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:30:04.324164  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.779383152s)
	I1109 13:30:04.324212  554049 addons.go:480] Verifying addon ingress=true in "addons-640912"
	I1109 13:30:04.324312  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (11.691928557s)
	I1109 13:30:04.324279  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.773208765s)
	I1109 13:30:04.324397  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.594469361s)
	I1109 13:30:04.324471  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.312528576s)
	I1109 13:30:04.324546  554049 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (11.064996826s)
	I1109 13:30:04.324573  554049 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (11.065034447s)
	I1109 13:30:04.324596  554049 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1109 13:30:04.324784  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.628636294s)
	I1109 13:30:04.324825  554049 addons.go:480] Verifying addon metrics-server=true in "addons-640912"
	I1109 13:30:04.324875  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.583378818s)
	I1109 13:30:04.324892  554049 addons.go:480] Verifying addon registry=true in "addons-640912"
	I1109 13:30:04.325127  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.2651527s)
	W1109 13:30:04.325346  554049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1109 13:30:04.325153  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.960820727s)
	I1109 13:30:04.325383  554049 retry.go:31] will retry after 202.022969ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
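
Note: this is the usual CRD-establishment race. The VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml is submitted in the same apply as the CRD that defines its kind, and the API server has not registered the new type yet ("ensure CRDs are installed first"). minikube simply retries after ~200ms; a hedged sketch of avoiding the race outright is to apply the CRD first and wait for it to reach the Established condition before applying the class:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
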
	I1109 13:30:04.325661  554049 node_ready.go:35] waiting up to 6m0s for node "addons-640912" to be "Ready" ...
	I1109 13:30:04.325983  554049 out.go:179] * Verifying ingress addon...
	I1109 13:30:04.326814  554049 out.go:179] * Verifying registry addon...
	I1109 13:30:04.327420  554049 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-640912 service yakd-dashboard -n yakd-dashboard
	
	I1109 13:30:04.328195  554049 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1109 13:30:04.328903  554049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1109 13:30:04.421113  554049 node_ready.go:49] node "addons-640912" is "Ready"
	I1109 13:30:04.421170  554049 node_ready.go:38] duration metric: took 95.473426ms for node "addons-640912" to be "Ready" ...
	I1109 13:30:04.421193  554049 api_server.go:52] waiting for apiserver process to appear ...
	I1109 13:30:04.421252  554049 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:30:04.436573  554049 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1109 13:30:04.436601  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:04.437324  554049 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1109 13:30:04.437349  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
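
Note: the kapi.go lines poll the listed label selectors until every matching pod reports Ready; "Pending: [<nil>]" means pods were found but none is Ready yet. An equivalent hand-run check (not the client-go polling minikube actually uses internally) would be:

	kubectl -n ingress-nginx wait --for=condition=Ready pod \
	  -l app.kubernetes.io/name=ingress-nginx --timeout=120s
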
	I1109 13:30:04.527734  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:30:04.850888  554049 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-640912" context rescaled to 1 replicas
	I1109 13:30:04.887833  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:04.891917  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:05.342314  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:05.346335  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:05.863995  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:05.864036  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:06.396694  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:06.402111  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:06.554519  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.183096689s)
	I1109 13:30:06.554582  554049 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-640912"
	I1109 13:30:06.554594  554049 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.813405865s)
	I1109 13:30:06.554623  554049 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.133355636s)
	I1109 13:30:06.554651  554049 api_server.go:72] duration metric: took 15.57629663s to wait for apiserver process to appear ...
	I1109 13:30:06.554661  554049 api_server.go:88] waiting for apiserver healthz status ...
	I1109 13:30:06.554691  554049 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I1109 13:30:06.556403  554049 out.go:179] * Verifying csi-hostpath-driver addon...
	I1109 13:30:06.556401  554049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:30:06.559165  554049 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1109 13:30:06.559901  554049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1109 13:30:06.560844  554049 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1109 13:30:06.560881  554049 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1109 13:30:06.598285  554049 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
	ok
	I1109 13:30:06.612843  554049 api_server.go:141] control plane version: v1.34.1
	I1109 13:30:06.612893  554049 api_server.go:131] duration metric: took 58.222701ms to wait for apiserver health ...
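
api_server.go above polls https://192.168.39.228:8443/healthz until it answers 200 ("ok") before moving on to the kube-system pod checks. A self-contained sketch of that probe using only the Go standard library; the address comes from this log, and InsecureSkipVerify is a shortcut standing in for the cluster CA bundle the real client trusts:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// The real client trusts the cluster CA; skipping verification here is
    	// purely to keep the sketch self-contained.
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.39.228:8443/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthz: ok")
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // retry until healthy or deadline
    	}
    	fmt.Println("apiserver never became healthy")
    }
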
	I1109 13:30:06.612928  554049 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 13:30:06.645111  554049 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1109 13:30:06.645145  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:06.677129  554049 system_pods.go:59] 20 kube-system pods found
	I1109 13:30:06.677261  554049 system_pods.go:61] "amd-gpu-device-plugin-2tv7p" [0019249b-f40e-4609-b592-f9fcc146c80a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1109 13:30:06.677278  554049 system_pods.go:61] "coredns-66bc5c9577-s9xxb" [e5d6eb11-cd0f-4ef0-b1ae-938e4c32f04b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:06.677293  554049 system_pods.go:61] "coredns-66bc5c9577-xtt8z" [4c0e27e8-3047-4a17-9435-f9185e872696] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:06.677304  554049 system_pods.go:61] "csi-hostpath-attacher-0" [d822fdee-fb25-4634-83b9-e9da33b6b333] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:30:06.677316  554049 system_pods.go:61] "csi-hostpath-resizer-0" [3d5fea9b-7c9b-4665-ac68-5e296d36729f] Pending
	I1109 13:30:06.677326  554049 system_pods.go:61] "csi-hostpathplugin-9dzzw" [ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:30:06.677338  554049 system_pods.go:61] "etcd-addons-640912" [be48210e-3e9d-4a68-b5a4-80f4a26fa4be] Running
	I1109 13:30:06.677344  554049 system_pods.go:61] "kube-apiserver-addons-640912" [066566b1-566d-491b-8be7-e1bf16b2ecb1] Running
	I1109 13:30:06.677349  554049 system_pods.go:61] "kube-controller-manager-addons-640912" [daaf7f94-d2de-42ec-8cd7-37bae6ec43ad] Running
	I1109 13:30:06.677359  554049 system_pods.go:61] "kube-ingress-dns-minikube" [fa72b9e2-abd1-49dd-b3cb-155aafc6e442] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:30:06.677369  554049 system_pods.go:61] "kube-proxy-8hbf4" [97813667-ffbc-4b8a-a122-3fa531d57ee3] Running
	I1109 13:30:06.677376  554049 system_pods.go:61] "kube-scheduler-addons-640912" [051715db-03e5-4cae-9b74-60fe58511b6b] Running
	I1109 13:30:06.677387  554049 system_pods.go:61] "metrics-server-85b7d694d7-lfl7n" [d4722f0c-b942-4bf8-88e4-a8d5b09f6fdf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:30:06.677399  554049 system_pods.go:61] "nvidia-device-plugin-daemonset-vhlxq" [b849210d-a4dd-4bfc-ad75-3bf99c214e37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:30:06.677407  554049 system_pods.go:61] "registry-6b586f9694-wm87r" [59b2fc4f-c09e-47b3-ac30-a3dce9d1d9b1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:30:06.677419  554049 system_pods.go:61] "registry-creds-764b6fb674-z2sqx" [8e9dea64-3610-47e1-9a4d-1f13f275439e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:30:06.677434  554049 system_pods.go:61] "registry-proxy-d6q28" [bc34c737-250f-4084-859d-d21c43a619d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:30:06.677445  554049 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pgl85" [d9a227fb-a833-4bc3-928b-eacf5e94bd0f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:06.677474  554049 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qk9k2" [548acda2-9430-4b25-a3a8-09e0a17aa95f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:06.677489  554049 system_pods.go:61] "storage-provisioner" [59b766d9-adb7-4e0c-bc68-9d9cf8fbbdba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:30:06.677500  554049 system_pods.go:74] duration metric: took 64.564101ms to wait for pod list to return data ...
	I1109 13:30:06.677515  554049 default_sa.go:34] waiting for default service account to be created ...
	I1109 13:30:06.698871  554049 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1109 13:30:06.698911  554049 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1109 13:30:06.723783  554049 default_sa.go:45] found service account: "default"
	I1109 13:30:06.723870  554049 default_sa.go:55] duration metric: took 46.344804ms for default service account to be created ...
	I1109 13:30:06.723888  554049 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 13:30:06.784361  554049 system_pods.go:86] 20 kube-system pods found
	I1109 13:30:06.784424  554049 system_pods.go:89] "amd-gpu-device-plugin-2tv7p" [0019249b-f40e-4609-b592-f9fcc146c80a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1109 13:30:06.784438  554049 system_pods.go:89] "coredns-66bc5c9577-s9xxb" [e5d6eb11-cd0f-4ef0-b1ae-938e4c32f04b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:06.784456  554049 system_pods.go:89] "coredns-66bc5c9577-xtt8z" [4c0e27e8-3047-4a17-9435-f9185e872696] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:06.784466  554049 system_pods.go:89] "csi-hostpath-attacher-0" [d822fdee-fb25-4634-83b9-e9da33b6b333] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:30:06.784474  554049 system_pods.go:89] "csi-hostpath-resizer-0" [3d5fea9b-7c9b-4665-ac68-5e296d36729f] Pending
	I1109 13:30:06.784485  554049 system_pods.go:89] "csi-hostpathplugin-9dzzw" [ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:30:06.784495  554049 system_pods.go:89] "etcd-addons-640912" [be48210e-3e9d-4a68-b5a4-80f4a26fa4be] Running
	I1109 13:30:06.784616  554049 system_pods.go:89] "kube-apiserver-addons-640912" [066566b1-566d-491b-8be7-e1bf16b2ecb1] Running
	I1109 13:30:06.784630  554049 system_pods.go:89] "kube-controller-manager-addons-640912" [daaf7f94-d2de-42ec-8cd7-37bae6ec43ad] Running
	I1109 13:30:06.784642  554049 system_pods.go:89] "kube-ingress-dns-minikube" [fa72b9e2-abd1-49dd-b3cb-155aafc6e442] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:30:06.784654  554049 system_pods.go:89] "kube-proxy-8hbf4" [97813667-ffbc-4b8a-a122-3fa531d57ee3] Running
	I1109 13:30:06.784663  554049 system_pods.go:89] "kube-scheduler-addons-640912" [051715db-03e5-4cae-9b74-60fe58511b6b] Running
	I1109 13:30:06.784714  554049 system_pods.go:89] "metrics-server-85b7d694d7-lfl7n" [d4722f0c-b942-4bf8-88e4-a8d5b09f6fdf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:30:06.784734  554049 system_pods.go:89] "nvidia-device-plugin-daemonset-vhlxq" [b849210d-a4dd-4bfc-ad75-3bf99c214e37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:30:06.784749  554049 system_pods.go:89] "registry-6b586f9694-wm87r" [59b2fc4f-c09e-47b3-ac30-a3dce9d1d9b1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:30:06.784761  554049 system_pods.go:89] "registry-creds-764b6fb674-z2sqx" [8e9dea64-3610-47e1-9a4d-1f13f275439e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:30:06.784769  554049 system_pods.go:89] "registry-proxy-d6q28" [bc34c737-250f-4084-859d-d21c43a619d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:30:06.784779  554049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pgl85" [d9a227fb-a833-4bc3-928b-eacf5e94bd0f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:06.784787  554049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qk9k2" [548acda2-9430-4b25-a3a8-09e0a17aa95f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:06.784813  554049 system_pods.go:89] "storage-provisioner" [59b766d9-adb7-4e0c-bc68-9d9cf8fbbdba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:30:06.784835  554049 system_pods.go:126] duration metric: took 60.936237ms to wait for k8s-apps to be running ...
	I1109 13:30:06.784852  554049 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 13:30:06.784957  554049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 13:30:06.790756  554049 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1109 13:30:06.790815  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1109 13:30:06.855567  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:06.856076  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:06.996894  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1109 13:30:07.069630  554049 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1109 13:30:07.069669  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:07.357817  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:07.358129  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:07.585714  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:07.837469  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:07.842429  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:08.001868  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.47407204s)
	I1109 13:30:08.001935  554049 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.216918597s)
	I1109 13:30:08.001974  554049 system_svc.go:56] duration metric: took 1.217116528s WaitForService to wait for kubelet
	I1109 13:30:08.001988  554049 kubeadm.go:587] duration metric: took 17.023632052s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:30:08.002022  554049 node_conditions.go:102] verifying NodePressure condition ...
	I1109 13:30:08.013233  554049 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1109 13:30:08.013279  554049 node_conditions.go:123] node cpu capacity is 2
	I1109 13:30:08.013321  554049 node_conditions.go:105] duration metric: took 11.288216ms to run NodePressure ...
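
node_conditions.go reads those capacity figures (2 CPUs, 17734596Ki of ephemeral storage) straight from the Node object's status.capacity map. The same two fields can be fetched with a jsonpath query; a small sketch, again assuming kubectl on PATH and the context and node name from this run:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Read the same capacity fields node_conditions.go logs above.
    	out, err := exec.Command("kubectl", "--context", "addons-640912",
    		"get", "node", "addons-640912", "-o",
    		"jsonpath={.status.capacity.cpu} {.status.capacity.ephemeral-storage}").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("cpu / ephemeral-storage capacity: %s\n", out)
    }
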
	I1109 13:30:08.013341  554049 start.go:242] waiting for startup goroutines ...
	I1109 13:30:08.072285  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:08.333086  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:08.336474  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:08.572226  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:08.887336  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.890385858s)
	I1109 13:30:08.888525  554049 addons.go:480] Verifying addon gcp-auth=true in "addons-640912"
	I1109 13:30:08.890860  554049 out.go:179] * Verifying gcp-auth addon...
	I1109 13:30:08.892713  554049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1109 13:30:08.939244  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:08.939347  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:08.991310  554049 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1109 13:30:08.991337  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:09.098338  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:09.344858  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:09.345304  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:09.399368  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:09.570285  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:09.838385  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:09.840384  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:09.898869  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:10.065083  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:10.334202  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:10.334309  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:10.401950  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:10.569284  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:10.836515  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:10.838899  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:10.896313  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:11.067129  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:11.339416  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:11.340743  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:11.402448  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:11.566253  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:11.837985  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:11.838020  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:11.898902  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:12.066368  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:12.337501  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:12.338519  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:12.399240  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:12.571326  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:12.832263  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:12.838277  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:12.897716  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:13.073975  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:13.345785  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:13.348013  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:13.397374  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:13.564325  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:13.837325  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:13.843684  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:13.902254  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:14.068483  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:14.335320  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:14.338051  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:14.396277  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:14.565373  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:14.834165  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:14.834467  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:14.897445  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:15.064757  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:15.333021  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:15.333719  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:15.397830  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:15.566785  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:15.835560  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:15.838276  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:15.900642  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:16.067501  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:16.337462  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:16.337641  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:16.398587  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:16.566906  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:16.834191  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:16.834422  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:16.897896  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:17.066472  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:17.336985  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:17.337337  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:17.399227  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:17.565260  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:17.836508  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:17.837830  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:17.897337  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:18.065001  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:18.332999  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:18.335394  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:18.402571  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:18.564851  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:18.840456  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:18.843062  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:18.899589  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:19.068832  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:19.339870  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:19.341559  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:19.399386  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:19.587728  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:20.102869  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:20.102915  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:20.104530  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:20.104680  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:20.336692  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:20.336706  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:20.436134  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:20.563604  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:20.837295  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:20.843051  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:20.936258  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:21.065172  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:21.334067  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:21.335105  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:21.396790  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:21.564002  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:21.835247  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:21.835561  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:21.898139  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:22.070927  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:22.334447  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:22.334961  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:22.396866  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:22.567180  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:22.840032  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:22.840068  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:22.896778  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:23.071532  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:23.339919  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:23.340496  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:23.397236  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:23.566063  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:23.839678  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:23.841282  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:23.901245  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:24.071668  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:24.334636  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:24.335846  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:24.398620  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:24.567631  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:24.836032  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:24.836151  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:24.935042  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:25.065721  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:25.336610  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:25.337364  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:25.400426  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:25.566021  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:25.836480  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:25.838214  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:25.902147  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:26.071427  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:26.338573  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:26.338582  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:26.398771  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:26.565358  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:26.836720  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:26.840552  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:26.901096  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:27.067504  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:27.339750  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:27.341731  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:27.402242  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:27.569891  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:27.833392  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:27.833537  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:27.906589  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:28.065108  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:28.337155  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:28.337297  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:28.397195  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:28.566495  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:28.904921  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:28.907409  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:28.907434  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:29.072857  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:29.334467  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:29.336353  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:29.399920  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:29.566093  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:29.837017  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:29.840579  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:29.902450  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:30.065577  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:30.719201  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:30.724919  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:30.724939  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:30.724986  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:30.833958  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:30.834194  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:30.900339  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:31.065316  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:31.333087  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:31.333171  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:31.398332  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:31.564881  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:31.833924  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:31.837095  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:31.897424  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:32.069730  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:32.337945  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:32.340042  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:32.401234  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:32.567187  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:32.843640  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:32.847045  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:32.898376  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:33.069348  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:33.334614  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:33.339537  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:33.398429  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:33.566402  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:34.077754  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:34.078072  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:34.078683  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:34.079618  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:34.334588  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:34.337189  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:34.397855  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:34.572190  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:34.849654  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:34.849861  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:34.896948  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:35.074479  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:35.348183  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:35.356209  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:35.411951  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:35.570590  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:35.845515  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:35.845555  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:35.905142  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:36.071389  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:36.338701  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:36.340912  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:36.400596  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:36.568710  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:36.911585  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:36.915424  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:36.916949  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:37.067760  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:37.336355  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:37.339107  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:37.398618  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:37.569194  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:37.845063  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:37.845916  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:37.899000  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:38.067362  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:38.334562  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:38.336207  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:38.400168  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:38.571573  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:38.973809  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:38.974144  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:38.974151  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:39.068129  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:39.333195  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:39.335360  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:39.397654  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:39.564893  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:39.833320  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:39.839558  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:39.898484  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:40.065477  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:40.340552  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:40.341676  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:40.397951  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:40.568140  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:40.845076  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:40.845487  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:40.898753  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:41.071899  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:41.346589  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:41.359208  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:41.403903  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:41.571002  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:41.833974  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:41.837788  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:41.898684  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:42.069463  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:42.335582  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:42.338032  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:42.398193  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:42.565475  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:42.835535  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:42.837038  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:42.937572  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:43.073282  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:43.339090  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:43.339461  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:43.396901  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:43.586382  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:43.838864  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:43.843579  554049 kapi.go:107] duration metric: took 39.514673062s to wait for kubernetes.io/minikube-addons=registry ...
	I1109 13:30:43.905634  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:44.064934  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:44.332975  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:44.396420  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:44.571769  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:44.833998  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:44.897776  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:45.068095  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:45.344379  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:45.402752  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:45.574628  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:45.837165  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:45.899358  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:46.067886  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:46.335065  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:46.403112  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:46.577103  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:46.839115  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:46.896120  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:47.076119  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:47.350771  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:47.401893  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:47.571338  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:47.837062  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:47.896673  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:48.066817  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:48.337456  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:48.398614  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:48.565456  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:48.833611  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:48.897408  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:49.064823  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:49.335724  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:49.408948  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:49.565312  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:49.833445  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:49.898385  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:50.064095  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:50.334339  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:50.397598  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:50.569309  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:50.836332  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:50.898692  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:51.066221  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:51.480743  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:51.480846  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:51.568243  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:51.833039  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:51.933871  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:52.065619  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:52.335123  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:52.396946  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:52.566374  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:52.864538  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:52.956580  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:53.066131  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:53.340918  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:53.397918  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:53.570472  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:53.832824  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:53.899448  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:54.065472  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:54.332326  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:54.397534  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:54.568817  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:54.832947  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:54.901046  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:55.064454  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:55.335531  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:55.399529  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:55.569216  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:55.838545  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:55.905412  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:56.067458  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:56.334763  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:56.402225  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:56.768475  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:56.835262  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:56.907870  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:57.067772  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:57.339379  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:57.439713  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:57.573371  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:57.839441  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:57.908300  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:58.068309  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:58.338714  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:58.401258  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:58.565431  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:58.832874  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:58.897895  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:59.076776  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:59.332886  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:59.401336  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:59.572413  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:59.836884  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:59.935930  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:00.205382  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:00.341292  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:00.396631  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:00.568505  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:00.837424  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:00.929421  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:01.069724  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:01.335835  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:01.400290  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:01.564385  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:01.833209  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:01.898880  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:02.067659  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:02.333957  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:02.401527  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:02.573124  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:02.843273  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:02.946887  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:03.068597  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:03.336581  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:03.399764  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:03.567632  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:04.070184  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:04.071224  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:04.075196  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:04.337446  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:04.437852  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:04.566623  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:04.849898  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:04.946693  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:05.069001  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:05.335428  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:05.401410  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:05.566746  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:05.850306  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:05.910812  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:06.073522  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:06.350358  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:06.398770  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:06.570578  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:06.835835  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:06.937150  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:07.070212  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:07.342676  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:07.441121  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:07.575162  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:07.843325  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:07.898217  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:08.069896  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:08.336282  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:08.436654  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:08.572085  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:08.836872  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:08.900081  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:09.066104  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:09.331853  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:09.400057  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:09.564879  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:09.873005  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:09.897692  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:10.066725  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:10.339369  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:10.399557  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:10.572087  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:10.838743  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:10.897458  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:11.067721  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:11.335546  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:11.397389  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:11.566619  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:11.839606  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:11.902886  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:12.068399  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:12.332049  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:12.401492  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:12.565507  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:12.835898  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:12.907128  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:13.066925  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:13.338046  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:13.400870  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:13.563107  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:13.834771  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:13.937396  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:14.068487  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:14.332717  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:14.399661  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:14.570753  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:14.833332  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:14.897424  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:15.204038  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:15.339763  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:15.397926  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:15.568164  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:15.836548  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:15.899073  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:16.066864  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:16.333466  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:16.397331  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:16.567861  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:16.833409  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:16.897614  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:17.070130  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:17.337400  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:17.401374  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:17.565736  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:17.841910  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:17.898786  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:18.070624  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:18.333680  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:18.401244  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:18.571631  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:18.833883  554049 kapi.go:107] duration metric: took 1m14.505685559s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1109 13:31:18.898024  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:19.073709  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:19.402477  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:19.565307  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:19.904075  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:20.068726  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:20.398760  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:20.565697  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:20.896731  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:21.064644  554049 kapi.go:107] duration metric: took 1m14.504756398s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1109 13:31:21.397137  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:21.897734  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:22.398588  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:23.010336  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:23.397591  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:23.902542  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:24.399075  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:24.897122  554049 kapi.go:107] duration metric: took 1m16.004408046s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1109 13:31:24.898930  554049 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-640912 cluster.
	I1109 13:31:24.900363  554049 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1109 13:31:24.901752  554049 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1109 13:31:24.903272  554049 out.go:179] * Enabled addons: storage-provisioner, inspektor-gadget, nvidia-device-plugin, registry-creds, default-storageclass, amd-gpu-device-plugin, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1109 13:31:24.904711  554049 addons.go:515] duration metric: took 1m33.926303204s for enable addons: enabled=[storage-provisioner inspektor-gadget nvidia-device-plugin registry-creds default-storageclass amd-gpu-device-plugin cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1109 13:31:24.904783  554049 start.go:247] waiting for cluster config update ...
	I1109 13:31:24.904829  554049 start.go:256] writing updated cluster config ...
	I1109 13:31:24.905185  554049 ssh_runner.go:195] Run: rm -f paused
	I1109 13:31:24.913730  554049 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 13:31:24.921584  554049 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xtt8z" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:24.930823  554049 pod_ready.go:94] pod "coredns-66bc5c9577-xtt8z" is "Ready"
	I1109 13:31:24.930856  554049 pod_ready.go:86] duration metric: took 9.238515ms for pod "coredns-66bc5c9577-xtt8z" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:24.935635  554049 pod_ready.go:83] waiting for pod "etcd-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:24.945855  554049 pod_ready.go:94] pod "etcd-addons-640912" is "Ready"
	I1109 13:31:24.945886  554049 pod_ready.go:86] duration metric: took 10.21877ms for pod "etcd-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:24.949503  554049 pod_ready.go:83] waiting for pod "kube-apiserver-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:24.957508  554049 pod_ready.go:94] pod "kube-apiserver-addons-640912" is "Ready"
	I1109 13:31:24.957542  554049 pod_ready.go:86] duration metric: took 7.99802ms for pod "kube-apiserver-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:24.967022  554049 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:25.321696  554049 pod_ready.go:94] pod "kube-controller-manager-addons-640912" is "Ready"
	I1109 13:31:25.321729  554049 pod_ready.go:86] duration metric: took 354.672523ms for pod "kube-controller-manager-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:25.518911  554049 pod_ready.go:83] waiting for pod "kube-proxy-8hbf4" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:25.924834  554049 pod_ready.go:94] pod "kube-proxy-8hbf4" is "Ready"
	I1109 13:31:25.924867  554049 pod_ready.go:86] duration metric: took 405.924687ms for pod "kube-proxy-8hbf4" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:26.125658  554049 pod_ready.go:83] waiting for pod "kube-scheduler-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:26.520634  554049 pod_ready.go:94] pod "kube-scheduler-addons-640912" is "Ready"
	I1109 13:31:26.520674  554049 pod_ready.go:86] duration metric: took 394.982788ms for pod "kube-scheduler-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:26.520688  554049 pod_ready.go:40] duration metric: took 1.606902329s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 13:31:26.575762  554049 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 13:31:26.577333  554049 out.go:179] * Done! kubectl is now configured to use "addons-640912" cluster and "default" namespace by default
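
The kapi.go:96 / kapi.go:107 entries above come from minikube polling each addon's pods by label selector until they report Ready, then logging the elapsed time as a "duration metric". A minimal sketch of that polling pattern, assuming client-go; the helper name waitForPodsByLabel, the 500ms interval, and the log wording are illustrative, not minikube's actual implementation:

package example

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsByLabel polls pods matching selector in ns until every matching
// pod is Running and Ready, printing a "waiting for pod" line on each miss,
// like the loop in the log above. Interval and timeout are illustrative.
func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient API errors and empty lists: keep polling
			}
			for i := range pods.Items {
				p := &pods.Items[i]
				if p.Status.Phase != corev1.PodRunning || !podReady(p) {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	if err == nil {
		fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
	}
	return err
}

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}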
	
	
	==> CRI-O <==
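
The debug entries in this section are CRI-O's traces of CRI gRPC calls it received (/runtime.v1.RuntimeService/Version, /runtime.v1.RuntimeService/ListContainers, /runtime.v1.ImageService/ImageFsInfo). The same unfiltered ListContainers query can be reproduced directly against the runtime socket; a sketch assuming k8s.io/cri-api and the conventional CRI-O socket path /var/run/crio/crio.sock (an assumption; adjust for your host):

package example

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// listContainers issues a ListContainers request with no filter, which is why
// CRI-O logs "No filters were applied, returning full container list" below.
func listContainers() error {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return err
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
		ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		return err
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.State, c.Metadata.Name, c.Id)
	}
	return nil
}

From the command line, crictl ps -a and crictl imagefsinfo exercise the same endpoints.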
	Nov 09 13:40:17 addons-640912 crio[808]: time="2025-11-09 13:40:17.356098966Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762695617356064134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:510745,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30e85880-f334-472f-a12b-febd5807e391 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:40:17 addons-640912 crio[808]: time="2025-11-09 13:40:17.357416403Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec47d088-89cb-4aa8-8910-f19475e7347d name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:40:17 addons-640912 crio[808]: time="2025-11-09 13:40:17.357689025Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec47d088-89cb-4aa8-8910-f19475e7347d name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:40:17 addons-640912 crio[808]: time="2025-11-09 13:40:17.358550409Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eacf871a61d3462bce510b8a7be4d66a24fdc0a53616b1f2526147ab839b5856,PodSandboxId:33d3042607b18515964bf61cd134e3070fa66eff00570ae2814c08265872d03b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1762695090213216833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 495cc12a-d51f-43be-a567-96a5b4fad03a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7f113792ee1aeacca2bc95d2857de71fa6874b3ee951659efa28320a524532,PodSandboxId:1b24a0719053d6cafc5b1179a4660b7383b3ad3fee86821846956848a958c4f6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1762695078417836968,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-8j7xf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 94b07f23-7caa-4ac1-8abc-174660a2f7a4,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5a862686cb4d2c0e54448fa8a8311708c3ae04778fab3490fa67801913b4a055,PodSandboxId:00a46d438634f1c852fdbf4f05d1491e38e2c51de289efbd8dd87e8acae44088,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a
7bf2,State:CONTAINER_EXITED,CreatedAt:1762695057053274602,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7kdd8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9535a584-09d0-470c-bdca-f8b70a29fe14,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e9e9c4d34d47e7d5a67e92ca9d8eb33ab0fd0ca3ffc05476e8c6342d0a0e7e,PodSandboxId:a3f82e39fba77bdb98d98684962717a0ca32dec2da0c643b2022e7041b41dcd1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba
112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1762695054467740038,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kj7f9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a16c39e-0afd-467d-9a68-c565ad3f14d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69fe297c1b50ac2bcfff0e0f2e8357c77f8446eda32f448474242c45e7beef16,PodSandboxId:57ab048400abbbc313169bbbd7ac0890f37bd48587fbee368b27c41aff728264,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1762695034556022840,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72b9e2-abd1-49dd-b3cb-155aafc6e442,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfab18621429e6c41f473ab17e6f86a03dc38ea44f3a31f18af4b9bbfc4c4874,PodSandboxId:4d885cc41b56cb4c4a8b9e39ac9027f00f4b68b9e2c742ea8006768f27f47e22,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha25
6:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1762695014619786271,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2tv7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0019249b-f40e-4609-b592-f9fcc146c80a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb6f2c7163352aca0e271bcb0b20322b3ac5af9c07044f0d2523de5bd207ba1,PodSandboxId:346d7ee8b9728fdb09d0b5fb01ac8c855b8a4ae22b2702c9a4b14f42f894a970,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:
6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762695002979098849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59b766d9-adb7-4e0c-bc68-9d9cf8fbbdba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdc72298c5067f27022c29a3fba2313bef279aa1262921b54b2abfeb1bcfd05,PodSandboxId:f734f4ea6404b3851d49fa04e9e7d50d1b5a1dd1811142e147a667e58d9e5c61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b19
0596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762694992805161180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xtt8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c0e27e8-3047-4a17-9435-f9185e872696,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d0daf4cf92a3d52b6f1d92f9a1bfeeac5d7a7f31da83e4fb7df39268b3e4ea5,PodSandboxId:28544be4ccc8d274ee0f1ce693a4962e7eb2818d5bbf182e93a9e060de954441,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762694991896517056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97813667-ffbc-4b8a-a122-3fa531d57ee3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1939a4061bbfb63f842de0036168e14c4e2c4e170bc480e3de75bc49ea789293,PodSandboxId:8461abff35ed3e424ad88b8d770b8fd6f1d6ca8979d38ab40c0900b1f9daa2aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762694978374008451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f2f0cef7cfff2538acb5ffb3152000c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f516d00cd42560c02c8218477c1f740c22bdc3e04d80a67197f40634f93e478e,PodSandboxId:9d5b3d3ae012e6b1f760709c59c955b1738d0410b8a917c420fa67cd1ad2af07,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762694978316917120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50977dcfe4ea6e6a61a3e7cf80dace1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381
,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5312ba3c9de673540c31b087b548f07f9144cbe046b292ca5260fa3a2418c7,PodSandboxId:8cb548decbe810fcc34c2b7b41b0518f383ba44d30236dad818bfd86a008c2a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762694978350105060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ac15728fbe3a146e056bd33fb08144,},Annotations:map[string]string{io.kubern
etes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5f31d63b316b6b4fceaf2cd37baa19b18c8dd0332189d49ddc503733e9f8479,PodSandboxId:82cda88284e7044d38c14febb7d1ee51977ecbd36b41815ac224232e7fb2f5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762694978329597383,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-640912,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 40db61fb06568701553ada1b7a8540a0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec47d088-89cb-4aa8-8910-f19475e7347d name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:40:17 addons-640912 crio[808]: time="2025-11-09 13:40:17.407392864Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3571f879-0438-48c8-b190-5c01296607ce name=/runtime.v1.RuntimeService/Version
	Nov 09 13:40:17 addons-640912 crio[808]: time="2025-11-09 13:40:17.407500666Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3571f879-0438-48c8-b190-5c01296607ce name=/runtime.v1.RuntimeService/Version
	Nov 09 13:40:17 addons-640912 crio[808]: time="2025-11-09 13:40:17.409530905Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2788a52-4a29-4844-b00b-50cded22c490 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:40:17 addons-640912 crio[808]: time="2025-11-09 13:40:17.411631588Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762695617411554347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:510745,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2788a52-4a29-4844-b00b-50cded22c490 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:40:17 addons-640912 crio[808]: time="2025-11-09 13:40:17.412831285Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e40b06f-18bd-4936-845f-a5a6b67bf456 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:40:17 addons-640912 crio[808]: time="2025-11-09 13:40:17.413000714Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e40b06f-18bd-4936-845f-a5a6b67bf456 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:40:17 addons-640912 crio[808]: time="2025-11-09 13:40:17.413301378Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eacf871a61d3462bce510b8a7be4d66a24fdc0a53616b1f2526147ab839b5856,PodSandboxId:33d3042607b18515964bf61cd134e3070fa66eff00570ae2814c08265872d03b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1762695090213216833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 495cc12a-d51f-43be-a567-96a5b4fad03a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7f113792ee1aeacca2bc95d2857de71fa6874b3ee951659efa28320a524532,PodSandboxId:1b24a0719053d6cafc5b1179a4660b7383b3ad3fee86821846956848a958c4f6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1762695078417836968,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-8j7xf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 94b07f23-7caa-4ac1-8abc-174660a2f7a4,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5a862686cb4d2c0e54448fa8a8311708c3ae04778fab3490fa67801913b4a055,PodSandboxId:00a46d438634f1c852fdbf4f05d1491e38e2c51de289efbd8dd87e8acae44088,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a
7bf2,State:CONTAINER_EXITED,CreatedAt:1762695057053274602,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7kdd8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9535a584-09d0-470c-bdca-f8b70a29fe14,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e9e9c4d34d47e7d5a67e92ca9d8eb33ab0fd0ca3ffc05476e8c6342d0a0e7e,PodSandboxId:a3f82e39fba77bdb98d98684962717a0ca32dec2da0c643b2022e7041b41dcd1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba
112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1762695054467740038,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kj7f9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a16c39e-0afd-467d-9a68-c565ad3f14d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69fe297c1b50ac2bcfff0e0f2e8357c77f8446eda32f448474242c45e7beef16,PodSandboxId:57ab048400abbbc313169bbbd7ac0890f37bd48587fbee368b27c41aff728264,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1762695034556022840,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72b9e2-abd1-49dd-b3cb-155aafc6e442,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfab18621429e6c41f473ab17e6f86a03dc38ea44f3a31f18af4b9bbfc4c4874,PodSandboxId:4d885cc41b56cb4c4a8b9e39ac9027f00f4b68b9e2c742ea8006768f27f47e22,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha25
6:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1762695014619786271,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2tv7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0019249b-f40e-4609-b592-f9fcc146c80a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb6f2c7163352aca0e271bcb0b20322b3ac5af9c07044f0d2523de5bd207ba1,PodSandboxId:346d7ee8b9728fdb09d0b5fb01ac8c855b8a4ae22b2702c9a4b14f42f894a970,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:
6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762695002979098849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59b766d9-adb7-4e0c-bc68-9d9cf8fbbdba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdc72298c5067f27022c29a3fba2313bef279aa1262921b54b2abfeb1bcfd05,PodSandboxId:f734f4ea6404b3851d49fa04e9e7d50d1b5a1dd1811142e147a667e58d9e5c61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b19
0596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762694992805161180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xtt8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c0e27e8-3047-4a17-9435-f9185e872696,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d0daf4cf92a3d52b6f1d92f9a1bfeeac5d7a7f31da83e4fb7df39268b3e4ea5,PodSandboxId:28544be4ccc8d274ee0f1ce693a4962e7eb2818d5bbf182e93a9e060de954441,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762694991896517056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97813667-ffbc-4b8a-a122-3fa531d57ee3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1939a4061bbfb63f842de0036168e14c4e2c4e170bc480e3de75bc49ea789293,PodSandboxId:8461abff35ed3e424ad88b8d770b8fd6f1d6ca8979d38ab40c0900b1f9daa2aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762694978374008451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f2f0cef7cfff2538acb5ffb3152000c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f516d00cd42560c02c8218477c1f740c22bdc3e04d80a67197f40634f93e478e,PodSandboxId:9d5b3d3ae012e6b1f760709c59c955b1738d0410b8a917c420fa67cd1ad2af07,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762694978316917120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50977dcfe4ea6e6a61a3e7cf80dace1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381
,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5312ba3c9de673540c31b087b548f07f9144cbe046b292ca5260fa3a2418c7,PodSandboxId:8cb548decbe810fcc34c2b7b41b0518f383ba44d30236dad818bfd86a008c2a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762694978350105060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ac15728fbe3a146e056bd33fb08144,},Annotations:map[string]string{io.kubern
etes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5f31d63b316b6b4fceaf2cd37baa19b18c8dd0332189d49ddc503733e9f8479,PodSandboxId:82cda88284e7044d38c14febb7d1ee51977ecbd36b41815ac224232e7fb2f5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762694978329597383,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-640912,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 40db61fb06568701553ada1b7a8540a0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e40b06f-18bd-4936-845f-a5a6b67bf456 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:40:17 addons-640912 crio[808]: time="2025-11-09 13:40:17.460059590Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a91a882d-7503-48dd-bf0f-415dea23e529 name=/runtime.v1.RuntimeService/Version
	Nov 09 13:40:17 addons-640912 crio[808]: time="2025-11-09 13:40:17.460142277Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a91a882d-7503-48dd-bf0f-415dea23e529 name=/runtime.v1.RuntimeService/Version
	Nov 09 13:40:17 addons-640912 crio[808]: time="2025-11-09 13:40:17.463816012Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64dd4ea5-f563-4518-b7ac-704f7ec68c6f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:40:17 addons-640912 crio[808]: time="2025-11-09 13:40:17.466644841Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762695617466572645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:510745,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64dd4ea5-f563-4518-b7ac-704f7ec68c6f name=/runtime.v1.ImageService/ImageFsInfo
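	
	The three request/response pairs above are one poll cycle of CRI gRPC calls (ListContainers, Version, ImageFsInfo) against cri-o's runtime socket. The identical RPCs can be issued by hand with crictl; a sketch, assuming crictl is installed in the guest and cri-o's default socket path:
	
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # RuntimeService/ListContainers, no filter
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # RuntimeService/Version
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo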
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	eacf871a61d34       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          8 minutes ago       Running             busybox                   0                   33d3042607b18       busybox
	6c7f113792ee1       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             8 minutes ago       Running             controller                0                   1b24a0719053d       ingress-nginx-controller-675c5ddd98-8j7xf
	5a862686cb4d2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   9 minutes ago       Exited              patch                     0                   00a46d438634f       ingress-nginx-admission-patch-7kdd8
	52e9e9c4d34d4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   9 minutes ago       Exited              create                    0                   a3f82e39fba77       ingress-nginx-admission-create-kj7f9
	69fe297c1b50a       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               9 minutes ago       Running             minikube-ingress-dns      0                   57ab048400abb       kube-ingress-dns-minikube
	cfab18621429e       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     10 minutes ago      Running             amd-gpu-device-plugin     0                   4d885cc41b56c       amd-gpu-device-plugin-2tv7p
	1bb6f2c716335       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner       0                   346d7ee8b9728       storage-provisioner
	ecdc72298c506       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             10 minutes ago      Running             coredns                   0                   f734f4ea6404b       coredns-66bc5c9577-xtt8z
	4d0daf4cf92a3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             10 minutes ago      Running             kube-proxy                0                   28544be4ccc8d       kube-proxy-8hbf4
	1939a4061bbfb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             10 minutes ago      Running             kube-controller-manager   0                   8461abff35ed3       kube-controller-manager-addons-640912
	7a5312ba3c9de       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             10 minutes ago      Running             kube-scheduler            0                   8cb548decbe81       kube-scheduler-addons-640912
	b5f31d63b316b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             10 minutes ago      Running             kube-apiserver            0                   82cda88284e70       kube-apiserver-addons-640912
	f516d00cd4256       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             10 minutes ago      Running             etcd                      0                   9d5b3d3ae012e       etcd-addons-640912
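	
	Container IDs in the first column are truncated to 13 hex characters; crictl should accept any unique ID prefix, so rows from this table can be inspected directly on the node (a sketch, assuming shell access to the guest):
	
	  sudo crictl inspect eacf871a61d34   # full status and config of the busybox container
	  sudo crictl logs 6c7f113792ee1      # stdout/stderr of the ingress-nginx controller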
	
	
	==> coredns [ecdc72298c5067f27022c29a3fba2313bef279aa1262921b54b2abfeb1bcfd05] <==
	[INFO] 10.244.0.8:56749 - 57800 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000167846s
	[INFO] 10.244.0.8:56749 - 7634 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000234715s
	[INFO] 10.244.0.8:56749 - 64775 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000214438s
	[INFO] 10.244.0.8:56749 - 27735 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000222714s
	[INFO] 10.244.0.8:56749 - 4667 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000235027s
	[INFO] 10.244.0.8:56749 - 32956 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000240757s
	[INFO] 10.244.0.8:56749 - 59149 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.001351156s
	[INFO] 10.244.0.8:47223 - 42964 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000217411s
	[INFO] 10.244.0.8:47223 - 43270 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001085953s
	[INFO] 10.244.0.8:60054 - 39280 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000101116s
	[INFO] 10.244.0.8:60054 - 39607 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000284859s
	[INFO] 10.244.0.8:45885 - 39288 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001299s
	[INFO] 10.244.0.8:45885 - 39507 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000087106s
	[INFO] 10.244.0.8:33022 - 41004 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000143608s
	[INFO] 10.244.0.8:33022 - 41467 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000090986s
	[INFO] 10.244.0.23:41923 - 2129 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000901948s
	[INFO] 10.244.0.23:37925 - 19699 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000411247s
	[INFO] 10.244.0.23:56154 - 55757 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000225694s
	[INFO] 10.244.0.23:55144 - 14584 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000195303s
	[INFO] 10.244.0.23:43131 - 45070 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000319047s
	[INFO] 10.244.0.23:59696 - 23369 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.002225751s
	[INFO] 10.244.0.23:45065 - 55293 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001506203s
	[INFO] 10.244.0.23:47314 - 7537 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.005372558s
	[INFO] 10.244.0.28:41385 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.003092641s
	[INFO] 10.244.0.28:50820 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.001765974s
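	
	The NXDOMAIN bursts above are ordinary search-path expansion rather than lookup failures: with the kubelet default of ndots:5, a name such as registry.kube-system is tried against each resolv.conf search domain (kube-system.svc.cluster.local, svc.cluster.local, cluster.local) before the fully qualified query returns NOERROR. This can be reproduced from the in-cluster busybox pod, assuming its image ships nslookup:
	
	  kubectl exec busybox -- cat /etc/resolv.conf                            # search domains and options ndots:5
	  kubectl exec busybox -- nslookup registry.kube-system.svc.cluster.local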
	
	
	==> describe nodes <==
	Name:               addons-640912
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-640912
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=addons-640912
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T13_29_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-640912
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:29:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-640912
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 13:40:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 13:39:07 +0000   Sun, 09 Nov 2025 13:29:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 13:39:07 +0000   Sun, 09 Nov 2025 13:29:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 13:39:07 +0000   Sun, 09 Nov 2025 13:29:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 13:39:07 +0000   Sun, 09 Nov 2025 13:29:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    addons-640912
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 50c2653cdfbf41f9bef0624b1a679070
	  System UUID:                50c2653c-dfbf-41f9-bef0-624b1a679070
	  Boot ID:                    92fab23c-5b35-498d-b1ae-dc16572c1ced
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m50s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-8j7xf    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 amd-gpu-device-plugin-2tv7p                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-xtt8z                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-640912                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-640912                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-640912        200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-8hbf4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-640912                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-640912 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-640912 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-640912 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m   kubelet          Node addons-640912 status is now: NodeReady
	  Normal  RegisteredNode           10m   node-controller  Node addons-640912 event: Registered Node addons-640912 in Controller
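A quick sanity check on the Allocated resources block above: the 850m CPU figure is just the sum of the per-pod requests listed, 100m (ingress-nginx-controller) + 100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, i.e. 42% of the node's 2 CPUs, and memory requests are 90Mi + 70Mi + 100Mi = 260Mi, about 6% of 4001788Ki. The node has ample headroom and every pod is scheduled (PodScheduled True), so the failures below are not a capacity or scheduling problem.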
	
	
	==> dmesg <==
	[Nov 9 13:30] kauditd_printk_skb: 123 callbacks suppressed
	[  +2.597925] kauditd_printk_skb: 235 callbacks suppressed
	[  +0.573763] kauditd_printk_skb: 410 callbacks suppressed
	[  +9.105607] kauditd_printk_skb: 35 callbacks suppressed
	[  +9.999909] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.891357] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.415789] kauditd_printk_skb: 122 callbacks suppressed
	[  +4.010962] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.104992] kauditd_printk_skb: 59 callbacks suppressed
	[Nov 9 13:31] kauditd_printk_skb: 87 callbacks suppressed
	[  +2.729784] kauditd_printk_skb: 111 callbacks suppressed
	[  +0.054539] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.206294] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.614998] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.051708] kauditd_printk_skb: 47 callbacks suppressed
	[  +9.781817] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.000042] kauditd_printk_skb: 22 callbacks suppressed
	[  +3.743671] kauditd_printk_skb: 109 callbacks suppressed
	[  +3.183523] kauditd_printk_skb: 109 callbacks suppressed
	[Nov 9 13:32] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.000937] kauditd_printk_skb: 30 callbacks suppressed
	[ +12.098567] kauditd_printk_skb: 94 callbacks suppressed
	[  +6.595281] kauditd_printk_skb: 80 callbacks suppressed
	[Nov 9 13:33] kauditd_printk_skb: 15 callbacks suppressed
	[Nov 9 13:38] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [f516d00cd42560c02c8218477c1f740c22bdc3e04d80a67197f40634f93e478e] <==
	{"level":"info","ts":"2025-11-09T13:31:22.996146Z","caller":"traceutil/trace.go:172","msg":"trace[302697611] linearizableReadLoop","detail":"{readStateIndex:1194; appliedIndex:1194; }","duration":"165.040081ms","start":"2025-11-09T13:31:22.831088Z","end":"2025-11-09T13:31:22.996128Z","steps":["trace[302697611] 'read index received'  (duration: 165.034114ms)","trace[302697611] 'applied index is now lower than readState.Index'  (duration: 5.157µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T13:31:22.996293Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.199351ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:31:22.996314Z","caller":"traceutil/trace.go:172","msg":"trace[1678018842] range","detail":"{range_begin:/registry/poddisruptionbudgets; range_end:; response_count:0; response_revision:1160; }","duration":"165.253621ms","start":"2025-11-09T13:31:22.831055Z","end":"2025-11-09T13:31:22.996309Z","steps":["trace[1678018842] 'agreement among raft nodes before linearized reading'  (duration: 165.171034ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T13:31:22.997662Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.922771ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:31:22.998744Z","caller":"traceutil/trace.go:172","msg":"trace[1858999846] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1161; }","duration":"103.012616ms","start":"2025-11-09T13:31:22.895717Z","end":"2025-11-09T13:31:22.998730Z","steps":["trace[1858999846] 'agreement among raft nodes before linearized reading'  (duration: 101.899265ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:31:22.999100Z","caller":"traceutil/trace.go:172","msg":"trace[1451434482] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"249.478691ms","start":"2025-11-09T13:31:22.749609Z","end":"2025-11-09T13:31:22.999088Z","steps":["trace[1451434482] 'process raft request'  (duration: 247.857862ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:31:52.754938Z","caller":"traceutil/trace.go:172","msg":"trace[6026568] linearizableReadLoop","detail":"{readStateIndex:1397; appliedIndex:1397; }","duration":"236.117273ms","start":"2025-11-09T13:31:52.518730Z","end":"2025-11-09T13:31:52.754847Z","steps":["trace[6026568] 'read index received'  (duration: 236.112503ms)","trace[6026568] 'applied index is now lower than readState.Index'  (duration: 4.061µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T13:31:52.755188Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"236.415585ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2025-11-09T13:31:52.755257Z","caller":"traceutil/trace.go:172","msg":"trace[6914757] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1354; }","duration":"236.519277ms","start":"2025-11-09T13:31:52.518725Z","end":"2025-11-09T13:31:52.755244Z","steps":["trace[6914757] 'agreement among raft nodes before linearized reading'  (duration: 236.32921ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T13:31:52.755661Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.569325ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" limit:1 ","response":"range_response_count:1 size:3395"}
	{"level":"info","ts":"2025-11-09T13:31:52.755687Z","caller":"traceutil/trace.go:172","msg":"trace[1620442481] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:1; response_revision:1355; }","duration":"185.600716ms","start":"2025-11-09T13:31:52.570080Z","end":"2025-11-09T13:31:52.755681Z","steps":["trace[1620442481] 'agreement among raft nodes before linearized reading'  (duration: 185.518604ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:31:52.755923Z","caller":"traceutil/trace.go:172","msg":"trace[1200344183] transaction","detail":"{read_only:false; response_revision:1355; number_of_response:1; }","duration":"304.583393ms","start":"2025-11-09T13:31:52.451331Z","end":"2025-11-09T13:31:52.755915Z","steps":["trace[1200344183] 'process raft request'  (duration: 304.178309ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T13:31:52.756031Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-09T13:31:52.451310Z","time spent":"304.631939ms","remote":"127.0.0.1:58684","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1343 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-11-09T13:31:55.033981Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.520258ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:31:55.034081Z","caller":"traceutil/trace.go:172","msg":"trace[553597333] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1365; }","duration":"136.623206ms","start":"2025-11-09T13:31:54.897438Z","end":"2025-11-09T13:31:55.034062Z","steps":["trace[553597333] 'range keys from in-memory index tree'  (duration: 136.438838ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:32:01.051010Z","caller":"traceutil/trace.go:172","msg":"trace[427081115] linearizableReadLoop","detail":"{readStateIndex:1451; appliedIndex:1451; }","duration":"321.984641ms","start":"2025-11-09T13:32:00.728995Z","end":"2025-11-09T13:32:01.050980Z","steps":["trace[427081115] 'read index received'  (duration: 321.978499ms)","trace[427081115] 'applied index is now lower than readState.Index'  (duration: 5.245µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T13:32:01.051205Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"322.326861ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:32:01.051230Z","caller":"traceutil/trace.go:172","msg":"trace[33595075] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1404; }","duration":"322.375104ms","start":"2025-11-09T13:32:00.728848Z","end":"2025-11-09T13:32:01.051224Z","steps":["trace[33595075] 'agreement among raft nodes before linearized reading'  (duration: 322.303091ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T13:32:01.052190Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.405283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-11-09T13:32:01.052402Z","caller":"traceutil/trace.go:172","msg":"trace[1969419880] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1405; }","duration":"217.720748ms","start":"2025-11-09T13:32:00.834666Z","end":"2025-11-09T13:32:01.052387Z","steps":["trace[1969419880] 'agreement among raft nodes before linearized reading'  (duration: 217.090716ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:32:01.052589Z","caller":"traceutil/trace.go:172","msg":"trace[1054216716] transaction","detail":"{read_only:false; response_revision:1405; number_of_response:1; }","duration":"365.515044ms","start":"2025-11-09T13:32:00.687065Z","end":"2025-11-09T13:32:01.052580Z","steps":["trace[1054216716] 'process raft request'  (duration: 364.182623ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T13:32:01.052693Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-09T13:32:00.687045Z","time spent":"365.59912ms","remote":"127.0.0.1:58726","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3708,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/cloud-spanner-emulator-6f9fcf858b-v5gk4\" mod_revision:1404 > success:<request_put:<key:\"/registry/pods/default/cloud-spanner-emulator-6f9fcf858b-v5gk4\" value_size:3638 >> failure:<request_range:<key:\"/registry/pods/default/cloud-spanner-emulator-6f9fcf858b-v5gk4\" > >"}
	{"level":"info","ts":"2025-11-09T13:39:39.373456Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1828}
	{"level":"info","ts":"2025-11-09T13:39:39.460700Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1828,"took":"85.70034ms","hash":640628007,"current-db-size-bytes":6189056,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":3977216,"current-db-size-in-use":"4.0 MB"}
	{"level":"info","ts":"2025-11-09T13:39:39.460805Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":640628007,"revision":1828,"compact-revision":-1}
	
	
	==> kernel <==
	 13:40:17 up 11 min,  0 users,  load average: 0.54, 0.81, 0.74
	Linux addons-640912 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Nov  1 20:49:51 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [b5f31d63b316b6b4fceaf2cd37baa19b18c8dd0332189d49ddc503733e9f8479] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1109 13:30:52.832035       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.99.89:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.99.89:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1109 13:30:52.900702       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1109 13:30:52.919741       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1109 13:31:36.567406       1 conn.go:339] Error on socket receive: read tcp 192.168.39.228:8443->192.168.39.1:33552: use of closed network connection
	I1109 13:31:47.188509       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.149.219"}
	I1109 13:32:15.730439       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1109 13:32:15.992589       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.68.202"}
	I1109 13:32:53.848728       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1109 13:38:17.576370       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1109 13:38:17.578434       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1109 13:38:17.623087       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1109 13:38:17.623227       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1109 13:38:17.645148       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1109 13:38:17.645281       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1109 13:38:17.682337       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1109 13:38:17.682474       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1109 13:38:17.737378       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1109 13:38:17.737524       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1109 13:38:18.646472       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1109 13:38:18.737577       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1109 13:38:18.755498       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E1109 13:38:19.173120       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	I1109 13:39:41.623768       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
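The v1beta1.metrics.k8s.io errors near the top of this log mean the aggregated metrics API briefly failed its availability probe while metrics-server was starting. One hedged way to confirm it recovered is to read the APIService condition directly; the object name is taken from the log above:

  kubectl --context addons-640912 get apiservice v1beta1.metrics.k8s.io
  kubectl --context addons-640912 get apiservice v1beta1.metrics.k8s.io \
    -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'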
	
	
	==> kube-controller-manager [1939a4061bbfb63f842de0036168e14c4e2c4e170bc480e3de75bc49ea789293] <==
	E1109 13:38:27.978237       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1109 13:38:27.979941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1109 13:38:28.310415       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1109 13:38:28.311753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1109 13:38:34.826742       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1109 13:38:34.828362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1109 13:38:37.337426       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1109 13:38:37.339142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1109 13:38:39.374344       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1109 13:38:39.375773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1109 13:38:41.187231       1 reconciler.go:364] "attacherDetacher.AttachVolume started" logger="persistentvolume-attach-detach-controller" volumeName="kubernetes.io/csi/hostpath.csi.k8s.io^806f848d-bd70-11f0-9a88-1ef5cbd621ce" nodeName="addons-640912" scheduledPods=["default/task-pv-pod"]
	E1109 13:38:51.219622       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1109 13:38:51.221388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1109 13:38:54.648521       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1109 13:38:54.650287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1109 13:38:58.028170       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1109 13:38:58.031501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1109 13:39:26.028722       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1109 13:39:26.030276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1109 13:39:27.679362       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1109 13:39:27.680810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1109 13:39:40.730404       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1109 13:39:40.731750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1109 13:40:01.306121       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1109 13:40:01.307375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
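These repeated *v1.PartialObjectMetadata watch failures start right after the apiserver log above shows the snapshot.storage.k8s.io groups being removed and their watchers terminated at 13:38:18; the controller-manager's metadata informers keep retrying against resources that no longer exist until they resync. A hedged check that the snapshot CRDs are in fact gone:

  kubectl --context addons-640912 get crd -o name | grep snapshot.storage.k8s.io \
    || echo 'no snapshot CRDs installed'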
	
	
	==> kube-proxy [4d0daf4cf92a3d52b6f1d92f9a1bfeeac5d7a7f31da83e4fb7df39268b3e4ea5] <==
	I1109 13:29:52.980421       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 13:29:53.082837       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 13:29:53.086021       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.228"]
	E1109 13:29:53.086130       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 13:29:53.751653       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1109 13:29:53.751799       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1109 13:29:53.751834       1 server_linux.go:132] "Using iptables Proxier"
	I1109 13:29:53.834205       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 13:29:53.836618       1 server.go:527] "Version info" version="v1.34.1"
	I1109 13:29:53.836664       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:29:53.846430       1 config.go:200] "Starting service config controller"
	I1109 13:29:53.846481       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 13:29:53.846506       1 config.go:106] "Starting endpoint slice config controller"
	I1109 13:29:53.846510       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 13:29:53.846520       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 13:29:53.846523       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 13:29:53.874452       1 config.go:309] "Starting node config controller"
	I1109 13:29:53.874500       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 13:29:53.874508       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 13:29:53.947795       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 13:29:53.947900       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 13:29:53.947945       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
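The "Kube-proxy configuration may be incomplete or incorrect" line above is advisory, and the log itself names the remedy (--nodeport-addresses primary). In a kubeadm-managed cluster like this one, a hedged sketch of applying it, assuming kubeadm's default kube-proxy ConfigMap layout, is to set the field in the embedded KubeProxyConfiguration and restart the DaemonSet:

  kubectl --context addons-640912 -n kube-system edit configmap kube-proxy
  # in the KubeProxyConfiguration section, set:  nodePortAddresses: ["primary"]
  kubectl --context addons-640912 -n kube-system rollout restart daemonset kube-proxy

This is optional hardening, not something the test itself requires.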
	
	
	==> kube-scheduler [7a5312ba3c9de673540c31b087b548f07f9144cbe046b292ca5260fa3a2418c7] <==
	E1109 13:29:41.646013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 13:29:41.646142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 13:29:41.646741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 13:29:41.647690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 13:29:41.647977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 13:29:41.648034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 13:29:42.450718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 13:29:42.531090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 13:29:42.551808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 13:29:42.573030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 13:29:42.613834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1109 13:29:42.617089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 13:29:42.636745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 13:29:42.745262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 13:29:42.747084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 13:29:42.809366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 13:29:42.869592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 13:29:42.934044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 13:29:42.941621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 13:29:42.985001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 13:29:43.033735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 13:29:43.088695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 13:29:43.123724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 13:29:43.146070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1109 13:29:44.634226       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
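The burst of "Failed to watch ... forbidden" errors is ordinary scheduler startup noise: the informers race the RBAC bootstrap, and the final "Caches are synced" line shows they recovered. A hedged way to verify the permissions after startup is impersonation:

  kubectl --context addons-640912 auth can-i list pods --as=system:kube-scheduler
  kubectl --context addons-640912 auth can-i watch csinodes.storage.k8s.io --as=system:kube-scheduler

Both should print "yes" once the system:kube-scheduler bindings exist.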
	
	
	==> kubelet <==
	Nov 09 13:39:23 addons-640912 kubelet[1496]: E1109 13:39:23.080837    1496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e7006701-5d88-4365-b100-377ce22b89cc"
	Nov 09 13:39:25 addons-640912 kubelet[1496]: E1109 13:39:25.707712    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762695565706807786  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:39:25 addons-640912 kubelet[1496]: E1109 13:39:25.707752    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762695565706807786  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:39:27 addons-640912 kubelet[1496]: E1109 13:39:27.083041    1496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="d25ea36b-0cab-4e93-a461-46fc1de68cdc"
	Nov 09 13:39:29 addons-640912 kubelet[1496]: I1109 13:39:29.079250    1496 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 09 13:39:35 addons-640912 kubelet[1496]: E1109 13:39:35.080696    1496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e7006701-5d88-4365-b100-377ce22b89cc"
	Nov 09 13:39:35 addons-640912 kubelet[1496]: E1109 13:39:35.712088    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762695575711097150  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:39:35 addons-640912 kubelet[1496]: E1109 13:39:35.712122    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762695575711097150  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:39:38 addons-640912 kubelet[1496]: E1109 13:39:38.080939    1496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="d25ea36b-0cab-4e93-a461-46fc1de68cdc"
	Nov 09 13:39:45 addons-640912 kubelet[1496]: E1109 13:39:45.716493    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762695585715346849  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:39:45 addons-640912 kubelet[1496]: E1109 13:39:45.716561    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762695585715346849  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:39:48 addons-640912 kubelet[1496]: E1109 13:39:48.080003    1496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e7006701-5d88-4365-b100-377ce22b89cc"
	Nov 09 13:39:53 addons-640912 kubelet[1496]: E1109 13:39:53.085563    1496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="d25ea36b-0cab-4e93-a461-46fc1de68cdc"
	Nov 09 13:39:55 addons-640912 kubelet[1496]: E1109 13:39:55.720589    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762695595719703295  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:39:55 addons-640912 kubelet[1496]: E1109 13:39:55.721101    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762695595719703295  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:40:02 addons-640912 kubelet[1496]: E1109 13:40:02.193531    1496 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Nov 09 13:40:02 addons-640912 kubelet[1496]: E1109 13:40:02.193639    1496 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Nov 09 13:40:02 addons-640912 kubelet[1496]: E1109 13:40:02.194024    1496 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod test-local-path_default(24715673-6be0-4489-8fb3-064bda4b15c9): ErrImagePull: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 09 13:40:02 addons-640912 kubelet[1496]: E1109 13:40:02.194080    1496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="24715673-6be0-4489-8fb3-064bda4b15c9"
	Nov 09 13:40:05 addons-640912 kubelet[1496]: E1109 13:40:05.081405    1496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="d25ea36b-0cab-4e93-a461-46fc1de68cdc"
	Nov 09 13:40:05 addons-640912 kubelet[1496]: E1109 13:40:05.724453    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762695605723822062  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:40:05 addons-640912 kubelet[1496]: E1109 13:40:05.724517    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762695605723822062  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:40:14 addons-640912 kubelet[1496]: E1109 13:40:14.081534    1496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="24715673-6be0-4489-8fb3-064bda4b15c9"
	Nov 09 13:40:15 addons-640912 kubelet[1496]: E1109 13:40:15.729415    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762695615728849664  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:40:15 addons-640912 kubelet[1496]: E1109 13:40:15.729439    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762695615728849664  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
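Every pull failure in this kubelet log is the same docker.io error, toomanyrequests (the unauthenticated Docker Hub pull rate limit), not a cluster fault. A hedged workaround for reruns is to side-load the images so the kubelet never contacts the registry; this assumes the images already exist in the host's local container store, from which minikube image load copies them into the node's CRI-O storage:

  minikube -p addons-640912 image load docker.io/nginx:alpine
  minikube -p addons-640912 image load docker.io/nginx
  minikube -p addons-640912 image load busybox:stable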
	
	
	==> storage-provisioner [1bb6f2c7163352aca0e271bcb0b20322b3ac5af9c07044f0d2523de5bd207ba1] <==
	W1109 13:39:52.455495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:39:54.459932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:39:54.467043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:39:56.472067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:39:56.482161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:39:58.487515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:39:58.495289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:40:00.499315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:40:00.509630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:40:02.514070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:40:02.523653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:40:04.529154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:40:04.536507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:40:06.540556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:40:06.547970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:40:08.552424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:40:08.562109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:40:10.566954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:40:10.574126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:40:12.579077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:40:12.588584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:40:14.593772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:40:14.603162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:40:16.609960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:40:16.618388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
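These warnings repeat every two seconds because the storage-provisioner's leader election still polls a v1 Endpoints object (k8s.io-minikube-hostpath, visible in the etcd trace earlier); they are deprecation notices, not errors. A hedged look at both the legacy object and its modern replacement:

  kubectl --context addons-640912 -n kube-system get endpoints k8s.io-minikube-hostpath
  kubectl --context addons-640912 -n kube-system get endpointslices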
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-640912 -n addons-640912
helpers_test.go:269: (dbg) Run:  kubectl --context addons-640912 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-kj7f9 ingress-nginx-admission-patch-7kdd8
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-640912 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-kj7f9 ingress-nginx-admission-patch-7kdd8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-640912 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-kj7f9 ingress-nginx-admission-patch-7kdd8: exit status 1 (128.216613ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-640912/192.168.39.228
	Start Time:       Sun, 09 Nov 2025 13:32:15 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nxkzm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nxkzm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  8m3s                 default-scheduler  Successfully assigned default/nginx to addons-640912
	  Normal   Pulling    116s (x4 over 8m2s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     64s (x4 over 6m50s)  kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     64s (x4 over 6m50s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    13s (x9 over 6m49s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     13s (x9 over 6m49s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-640912/192.168.39.228
	Start Time:       Sun, 09 Nov 2025 13:32:14 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bmmc7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-bmmc7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  8m4s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-640912
	  Warning  Failed     109s (x4 over 7m20s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     109s (x4 over 7m20s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    30s (x11 over 7m19s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     30s (x11 over 7m19s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    17s (x5 over 8m4s)    kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-640912/192.168.39.228
	Start Time:       Sun, 09 Nov 2025 13:31:52 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sgzjt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-sgzjt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  8m26s                default-scheduler  Successfully assigned default/test-local-path to addons-640912
	  Warning  Failed     7m52s                kubelet            Failed to pull image "busybox:stable": fetching target platform image selected from image index: reading manifest sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    46s (x5 over 8m22s)  kubelet            Pulling image "busybox:stable"
	  Warning  Failed     16s (x5 over 7m52s)  kubelet            Error: ErrImagePull
	  Warning  Failed     16s (x4 over 6m20s)  kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4s (x12 over 7m52s)  kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     4s (x12 over 7m52s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-kj7f9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7kdd8" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-640912 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-kj7f9 ingress-nginx-admission-patch-7kdd8: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-640912 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-640912 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-640912 addons disable ingress --alsologtostderr -v=1: (7.992264634s)
--- FAIL: TestAddons/parallel/Ingress (492.48s)
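
All three pods above (nginx, task-pv-pod, test-local-path) fail the same way: unauthenticated pulls from docker.io hit the toomanyrequests rate limit. One mitigation sketch (hypothetical, not part of this run) is to pull the images once on the host, authenticated if credentials are available, and side-load them into the profile so kubelet never contacts Docker Hub:

    docker login                                      # authenticated pulls get a higher limit
    docker pull docker.io/nginx:alpine
    docker pull docker.io/nginx
    docker pull docker.io/busybox:stable
    minikube -p addons-640912 image load docker.io/nginx:alpine
    minikube -p addons-640912 image load docker.io/nginx
    minikube -p addons-640912 image load docker.io/busybox:stable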

                                                
                                    
TestAddons/parallel/CSI (373.86s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1109 13:32:11.666988  553473 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1109 13:32:11.678950  553473 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1109 13:32:11.678985  553473 kapi.go:107] duration metric: took 12.051502ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 12.062459ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-640912 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640912 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640912 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640912 get pvc hpvc -o jsonpath={.status.phase} -n default
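The three polls above read the claim phase by jsonpath; the same gate can be written as a single blocking call (a sketch using the hpvc claim from this test; kubectl wait has supported jsonpath conditions since v1.23):

    kubectl --context addons-640912 -n default wait pvc/hpvc \
      --for=jsonpath='{.status.phase}'=Bound --timeout=6m0s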
addons_test.go:562: (dbg) Run:  kubectl --context addons-640912 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [e7006701-5d88-4365-b100-377ce22b89cc] Pending
helpers_test.go:352: "task-pv-pod" [e7006701-5d88-4365-b100-377ce22b89cc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-640912 -n addons-640912
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-11-09 13:38:14.324881193 +0000 UTC m=+565.047355579
addons_test.go:567: (dbg) Run:  kubectl --context addons-640912 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-640912 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-640912/192.168.39.228
Start Time:       Sun, 09 Nov 2025 13:32:14 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.29
IPs:
  IP:  10.244.0.29
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP (http-server)
    Host Port:      0/TCP (http-server)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bmmc7 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc
    ReadOnly:   false
  kube-api-access-bmmc7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  6m                    default-scheduler  Successfully assigned default/task-pv-pod to addons-640912
  Warning  Failed     105s (x3 over 5m16s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     105s (x3 over 5m16s)  kubelet            Error: ErrImagePull
  Normal   BackOff    65s (x5 over 5m15s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     65s (x5 over 5m15s)   kubelet            Error: ImagePullBackOff
  Normal   Pulling    53s (x4 over 6m)      kubelet            Pulling image "docker.io/nginx"
addons_test.go:567: (dbg) Run:  kubectl --context addons-640912 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-640912 logs task-pv-pod -n default: exit status 1 (86.128463ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:567: kubectl --context addons-640912 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
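This is the same Docker Hub throttle that failed the Ingress test. The quota remaining for the runner's IP can be inspected against Docker's documented rate-limit preview endpoint (a sketch; assumes curl and jq on the host):

    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -s --head -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit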
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-640912 -n addons-640912
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-640912 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-640912 logs -n 25: (1.668283459s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-969818                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-969818 │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │ 09 Nov 25 13:28 UTC │
	│ start   │ -o=json --download-only -p download-only-045678 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-045678 │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ delete  │ -p download-only-045678                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-045678 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ delete  │ -p download-only-969818                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-969818 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ delete  │ -p download-only-045678                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-045678 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ start   │ --download-only -p binary-mirror-045777 --alsologtostderr --binary-mirror http://127.0.0.1:41935 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-045777 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ delete  │ -p binary-mirror-045777                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-045777 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ addons  │ enable dashboard -p addons-640912                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-640912                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ start   │ -p addons-640912 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ addons-640912 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ addons-640912 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ enable headlamp -p addons-640912 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ addons-640912 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ addons-640912 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ addons-640912 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-640912 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-640912 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ ip      │ addons-640912 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-640912 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-640912 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-640912                                                                                                                                                                                                                                                                                                                                                                                         │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-640912 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-640912 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:34 UTC │ 09 Nov 25 13:35 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:29:01
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:29:01.529521  554049 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:29:01.529783  554049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:29:01.529806  554049 out.go:374] Setting ErrFile to fd 2...
	I1109 13:29:01.529811  554049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:29:01.530042  554049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 13:29:01.530619  554049 out.go:368] Setting JSON to false
	I1109 13:29:01.531597  554049 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":69091,"bootTime":1762625851,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 13:29:01.531713  554049 start.go:143] virtualization: kvm guest
	I1109 13:29:01.533875  554049 out.go:179] * [addons-640912] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 13:29:01.535675  554049 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:29:01.535668  554049 notify.go:221] Checking for updates...
	I1109 13:29:01.538124  554049 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:29:01.539382  554049 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 13:29:01.540720  554049 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 13:29:01.542038  554049 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 13:29:01.543437  554049 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:29:01.545291  554049 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:29:01.580555  554049 out.go:179] * Using the kvm2 driver based on user configuration
	I1109 13:29:01.581955  554049 start.go:309] selected driver: kvm2
	I1109 13:29:01.581991  554049 start.go:930] validating driver "kvm2" against <nil>
	I1109 13:29:01.582008  554049 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:29:01.582854  554049 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 13:29:01.583161  554049 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:29:01.583199  554049 cni.go:84] Creating CNI manager for ""
	I1109 13:29:01.583249  554049 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 13:29:01.583262  554049 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1109 13:29:01.583305  554049 start.go:353] cluster config:
	{Name:addons-640912 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-640912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:29:01.583400  554049 iso.go:125] acquiring lock: {Name:mk8f0547b4600c0b2d1e831f024251df19b55199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:29:01.585006  554049 out.go:179] * Starting "addons-640912" primary control-plane node in "addons-640912" cluster
	I1109 13:29:01.586291  554049 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:29:01.586344  554049 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 13:29:01.586355  554049 cache.go:65] Caching tarball of preloaded images
	I1109 13:29:01.586504  554049 preload.go:238] Found /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 13:29:01.586520  554049 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 13:29:01.586929  554049 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/config.json ...
	I1109 13:29:01.586963  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/config.json: {Name:mk64beb99f02d72e356fa001c0aedbf8dde60a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:01.587175  554049 start.go:360] acquireMachinesLock for addons-640912: {Name:mkb2a0b4add9a99b18ce9ab72b74eb5b4fda0e0a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1109 13:29:01.587250  554049 start.go:364] duration metric: took 54.118µs to acquireMachinesLock for "addons-640912"
	I1109 13:29:01.587279  554049 start.go:93] Provisioning new machine with config: &{Name:addons-640912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-640912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:29:01.587339  554049 start.go:125] createHost starting for "" (driver="kvm2")
	I1109 13:29:01.588964  554049 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1109 13:29:01.589196  554049 start.go:159] libmachine.API.Create for "addons-640912" (driver="kvm2")
	I1109 13:29:01.589238  554049 client.go:173] LocalClient.Create starting
	I1109 13:29:01.589385  554049 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem
	I1109 13:29:01.866031  554049 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem
	I1109 13:29:02.376066  554049 main.go:143] libmachine: creating domain...
	I1109 13:29:02.376091  554049 main.go:143] libmachine: creating network...
	I1109 13:29:02.377887  554049 main.go:143] libmachine: found existing default network
	I1109 13:29:02.378145  554049 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1109 13:29:02.378765  554049 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f10d50}
	I1109 13:29:02.378922  554049 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-640912</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1109 13:29:02.385552  554049 main.go:143] libmachine: creating private network mk-addons-640912 192.168.39.0/24...
	I1109 13:29:02.480263  554049 main.go:143] libmachine: private network mk-addons-640912 192.168.39.0/24 created
	I1109 13:29:02.480592  554049 main.go:143] libmachine: <network>
	  <name>mk-addons-640912</name>
	  <uuid>5093d52e-d83e-4496-8f74-950632b55811</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:c3:49:16'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
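	# The network dump above can be reproduced from the host with stock virsh
	# (a manual check, not part of this run):
	#   virsh net-list --all                 # should show mk-addons-640912 active
	#   virsh net-dumpxml mk-addons-640912   # prints the XML shown above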
	
	I1109 13:29:02.480645  554049 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912 ...
	I1109 13:29:02.480684  554049 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21139-549598/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso
	I1109 13:29:02.480700  554049 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 13:29:02.480786  554049 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21139-549598/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21139-549598/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso...
	I1109 13:29:02.790875  554049 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa...
	I1109 13:29:03.048683  554049 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/addons-640912.rawdisk...
	I1109 13:29:03.048760  554049 main.go:143] libmachine: Writing magic tar header
	I1109 13:29:03.048789  554049 main.go:143] libmachine: Writing SSH key tar header
	I1109 13:29:03.048939  554049 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912 ...
	I1109 13:29:03.049043  554049 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912
	I1109 13:29:03.049096  554049 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912 (perms=drwx------)
	I1109 13:29:03.049130  554049 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598/.minikube/machines
	I1109 13:29:03.049146  554049 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598/.minikube/machines (perms=drwxr-xr-x)
	I1109 13:29:03.049170  554049 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 13:29:03.049190  554049 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598/.minikube (perms=drwxr-xr-x)
	I1109 13:29:03.049210  554049 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598
	I1109 13:29:03.049226  554049 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598 (perms=drwxrwxr-x)
	I1109 13:29:03.049244  554049 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1109 13:29:03.049267  554049 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1109 13:29:03.049283  554049 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1109 13:29:03.049298  554049 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1109 13:29:03.049317  554049 main.go:143] libmachine: checking permissions on dir: /home
	I1109 13:29:03.049332  554049 main.go:143] libmachine: skipping /home - not owner
	I1109 13:29:03.049347  554049 main.go:143] libmachine: defining domain...
	I1109 13:29:03.051070  554049 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-640912</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/addons-640912.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-640912'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1109 13:29:03.059933  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:f8:20:b0 in network default
	I1109 13:29:03.060875  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:03.060908  554049 main.go:143] libmachine: starting domain...
	I1109 13:29:03.060914  554049 main.go:143] libmachine: ensuring networks are active...
	I1109 13:29:03.062198  554049 main.go:143] libmachine: Ensuring network default is active
	I1109 13:29:03.062950  554049 main.go:143] libmachine: Ensuring network mk-addons-640912 is active
	I1109 13:29:03.064049  554049 main.go:143] libmachine: getting domain XML...
	I1109 13:29:03.066087  554049 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-640912</name>
	  <uuid>50c2653c-dfbf-41f9-bef0-624b1a679070</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/addons-640912.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:3b:97:c4'/>
	      <source network='mk-addons-640912'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:f8:20:b0'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1109 13:29:04.518458  554049 main.go:143] libmachine: waiting for domain to start...
	I1109 13:29:04.520285  554049 main.go:143] libmachine: domain is now running
	I1109 13:29:04.520317  554049 main.go:143] libmachine: waiting for IP...
	I1109 13:29:04.521463  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:04.522572  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:04.522598  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:04.523028  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:04.523130  554049 retry.go:31] will retry after 248.555943ms: waiting for domain to come up
	I1109 13:29:04.773776  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:04.774727  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:04.774751  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:04.775169  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:04.775219  554049 retry.go:31] will retry after 253.374239ms: waiting for domain to come up
	I1109 13:29:05.030329  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:05.031648  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:05.031676  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:05.032301  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:05.032357  554049 retry.go:31] will retry after 460.991203ms: waiting for domain to come up
	I1109 13:29:05.495209  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:05.495935  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:05.495953  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:05.496394  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:05.496445  554049 retry.go:31] will retry after 488.671936ms: waiting for domain to come up
	I1109 13:29:05.987310  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:05.988315  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:05.988337  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:05.988678  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:05.988724  554049 retry.go:31] will retry after 734.270823ms: waiting for domain to come up
	I1109 13:29:06.724517  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:06.725451  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:06.725483  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:06.726091  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:06.726145  554049 retry.go:31] will retry after 813.958486ms: waiting for domain to come up
	I1109 13:29:07.541351  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:07.542188  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:07.542215  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:07.542584  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:07.542638  554049 retry.go:31] will retry after 773.028537ms: waiting for domain to come up
	I1109 13:29:08.317882  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:08.318758  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:08.318779  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:08.319182  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:08.319228  554049 retry.go:31] will retry after 902.625899ms: waiting for domain to come up
	I1109 13:29:09.223517  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:09.224270  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:09.224291  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:09.224645  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:09.224698  554049 retry.go:31] will retry after 1.447427193s: waiting for domain to come up
	I1109 13:29:10.674526  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:10.675369  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:10.675411  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:10.675832  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:10.675890  554049 retry.go:31] will retry after 1.413133453s: waiting for domain to come up
	I1109 13:29:12.090825  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:12.091679  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:12.091701  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:12.092074  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:12.092132  554049 retry.go:31] will retry after 1.812634142s: waiting for domain to come up
	I1109 13:29:13.907484  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:13.908470  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:13.908492  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:13.908953  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:13.909039  554049 retry.go:31] will retry after 3.291540475s: waiting for domain to come up
	I1109 13:29:17.202151  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:17.202984  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:17.203006  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:17.203397  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:17.203453  554049 retry.go:31] will retry after 4.480228837s: waiting for domain to come up
	I1109 13:29:21.685736  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:21.686518  554049 main.go:143] libmachine: domain addons-640912 has current primary IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:21.686537  554049 main.go:143] libmachine: found domain IP: 192.168.39.228
	I1109 13:29:21.686546  554049 main.go:143] libmachine: reserving static IP address...
	I1109 13:29:21.687020  554049 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-640912", mac: "52:54:00:3b:97:c4", ip: "192.168.39.228"} in network mk-addons-640912
	I1109 13:29:21.917975  554049 main.go:143] libmachine: reserved static IP address 192.168.39.228 for domain addons-640912
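
	The "will retry after ..." cadence above is minikube's retry helper backing off while the VM acquires a DHCP lease. A minimal Go sketch of the same pattern; lookupIP and the exact durations are illustrative stand-ins, not minikube's actual API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease for domain yet")

// lookupIP is a hypothetical stand-in for libmachine's lease/ARP query.
func lookupIP(domain string) (string, error) {
	return "", errNoLease
}

// waitForIP polls until the domain reports an address or the deadline
// passes, sleeping a jittered, growing interval between attempts, the
// same shape as the "will retry after" lines in the log.
func waitForIP(domain string, deadline time.Duration) (string, error) {
	base := 900 * time.Millisecond
	for start := time.Now(); time.Since(start) < deadline; {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		sleep := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		base = base * 3 / 2
	}
	return "", fmt.Errorf("domain %s never came up", domain)
}

func main() {
	if _, err := waitForIP("addons-640912", 5*time.Second); err != nil {
		fmt.Println(err)
	}
}
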
	I1109 13:29:21.918007  554049 main.go:143] libmachine: waiting for SSH...
	I1109 13:29:21.918016  554049 main.go:143] libmachine: Getting to WaitForSSH function...
	I1109 13:29:21.923685  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:21.924701  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:21.924754  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:21.925088  554049 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:21.925387  554049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1109 13:29:21.925408  554049 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1109 13:29:22.046606  554049 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 13:29:22.047386  554049 main.go:143] libmachine: domain creation complete
	I1109 13:29:22.050123  554049 machine.go:94] provisionDockerMachine start ...
	I1109 13:29:22.054988  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.055715  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.055765  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.056311  554049 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:22.056903  554049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1109 13:29:22.056974  554049 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 13:29:22.180617  554049 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1109 13:29:22.180660  554049 buildroot.go:166] provisioning hostname "addons-640912"
	I1109 13:29:22.186275  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.187117  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.187172  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.187501  554049 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:22.187787  554049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1109 13:29:22.187941  554049 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-640912 && echo "addons-640912" | sudo tee /etc/hostname
	I1109 13:29:22.341110  554049 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-640912
	
	I1109 13:29:22.345909  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.346909  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.346961  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.347366  554049 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:22.347633  554049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1109 13:29:22.347656  554049 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-640912' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-640912/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-640912' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 13:29:22.484440  554049 main.go:143] libmachine: SSH cmd err, output: <nil>: 
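
	The script above keeps /etc/hosts consistent with the new hostname: rewrite an existing 127.0.1.1 entry if one exists, otherwise append one. A sketch of how such a snippet can be templated per host in Go; hostsFixup is a hypothetical helper for illustration, not minikube's:

package main

import "fmt"

// hostsFixup renders the /etc/hosts snippet the provisioner runs over
// SSH: replace an existing 127.0.1.1 line with the new hostname, or
// append one if none exists.
func hostsFixup(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsFixup("addons-640912"))
}
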
	I1109 13:29:22.484470  554049 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21139-549598/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-549598/.minikube}
	I1109 13:29:22.484528  554049 buildroot.go:174] setting up certificates
	I1109 13:29:22.484547  554049 provision.go:84] configureAuth start
	I1109 13:29:22.488028  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.488482  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.488510  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.491209  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.491676  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.491713  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.491896  554049 provision.go:143] copyHostCerts
	I1109 13:29:22.492005  554049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/key.pem (1679 bytes)
	I1109 13:29:22.492184  554049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/ca.pem (1082 bytes)
	I1109 13:29:22.492340  554049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/cert.pem (1123 bytes)
	I1109 13:29:22.492422  554049 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem org=jenkins.addons-640912 san=[127.0.0.1 192.168.39.228 addons-640912 localhost minikube]
	I1109 13:29:22.673233  554049 provision.go:177] copyRemoteCerts
	I1109 13:29:22.673315  554049 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 13:29:22.676789  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.677351  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.677382  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.677656  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:22.784762  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 13:29:22.825830  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 13:29:22.864504  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 13:29:22.902700  554049 provision.go:87] duration metric: took 418.129808ms to configureAuth
	I1109 13:29:22.902746  554049 buildroot.go:189] setting minikube options for container-runtime
	I1109 13:29:22.903033  554049 config.go:182] Loaded profile config "addons-640912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:29:22.907271  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.907853  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.907882  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.908152  554049 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:22.908394  554049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1109 13:29:22.908415  554049 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 13:29:23.187121  554049 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 13:29:23.187169  554049 machine.go:97] duration metric: took 1.136996743s to provisionDockerMachine
	I1109 13:29:23.187186  554049 client.go:176] duration metric: took 21.597936799s to LocalClient.Create
	I1109 13:29:23.187206  554049 start.go:167] duration metric: took 21.598018749s to libmachine.API.Create "addons-640912"
	I1109 13:29:23.187218  554049 start.go:293] postStartSetup for "addons-640912" (driver="kvm2")
	I1109 13:29:23.187233  554049 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 13:29:23.187304  554049 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 13:29:23.190951  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.191437  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:23.191471  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.191673  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:23.284957  554049 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 13:29:23.290908  554049 info.go:137] Remote host: Buildroot 2025.02
	I1109 13:29:23.290944  554049 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-549598/.minikube/addons for local assets ...
	I1109 13:29:23.291033  554049 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-549598/.minikube/files for local assets ...
	I1109 13:29:23.291059  554049 start.go:296] duration metric: took 103.83477ms for postStartSetup
	I1109 13:29:23.294496  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.294979  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:23.295007  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.295298  554049 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/config.json ...
	I1109 13:29:23.295545  554049 start.go:128] duration metric: took 21.708191701s to createHost
	I1109 13:29:23.298433  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.298897  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:23.298929  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.299160  554049 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:23.299426  554049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1109 13:29:23.299443  554049 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1109 13:29:23.418842  554049 main.go:143] libmachine: SSH cmd err, output: <nil>: 1762694963.374684002
	
	I1109 13:29:23.418873  554049 fix.go:216] guest clock: 1762694963.374684002
	I1109 13:29:23.418882  554049 fix.go:229] Guest: 2025-11-09 13:29:23.374684002 +0000 UTC Remote: 2025-11-09 13:29:23.295558762 +0000 UTC m=+21.824848523 (delta=79.12524ms)
	I1109 13:29:23.418901  554049 fix.go:200] guest clock delta is within tolerance: 79.12524ms
	I1109 13:29:23.418908  554049 start.go:83] releasing machines lock for "addons-640912", held for 21.831643055s
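
	The clock check above parses the guest's "date +%s.%N" output and compares it against the host-side timestamp; provisioning only resyncs when the delta exceeds a tolerance. A sketch reproducing the 79.12524ms delta from the log (the 2s tolerance is an assumption, not necessarily minikube's threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "date +%s.%N" output such as
// "1762694963.374684002" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1762694963.374684002")
	if err != nil {
		panic(err)
	}
	remote := time.Date(2025, 11, 9, 13, 29, 23, 295558762, time.UTC)
	delta := guest.Sub(remote)
	const tolerance = 2 * time.Second // assumed threshold, for illustration
	fmt.Printf("guest clock delta %v, within tolerance: %v\n",
		delta, -tolerance < delta && delta < tolerance)
}
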
	I1109 13:29:23.422763  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.423397  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:23.423435  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.424204  554049 ssh_runner.go:195] Run: cat /version.json
	I1109 13:29:23.424308  554049 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 13:29:23.428595  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.428753  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.429413  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:23.429427  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:23.429458  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.429456  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.429725  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:23.430070  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:23.517677  554049 ssh_runner.go:195] Run: systemctl --version
	I1109 13:29:23.548521  554049 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 13:29:23.725456  554049 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 13:29:23.734446  554049 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 13:29:23.734539  554049 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 13:29:23.762212  554049 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 13:29:23.762264  554049 start.go:496] detecting cgroup driver to use...
	I1109 13:29:23.762376  554049 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 13:29:23.789312  554049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 13:29:23.811825  554049 docker.go:218] disabling cri-docker service (if available) ...
	I1109 13:29:23.811901  554049 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 13:29:23.835122  554049 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 13:29:23.857937  554049 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 13:29:24.036028  554049 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 13:29:24.258493  554049 docker.go:234] disabling docker service ...
	I1109 13:29:24.258579  554049 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 13:29:24.279139  554049 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 13:29:24.297344  554049 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 13:29:24.474651  554049 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 13:29:24.636841  554049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 13:29:24.655525  554049 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 13:29:24.685964  554049 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 13:29:24.686029  554049 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:24.703327  554049 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 13:29:24.703429  554049 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:24.723542  554049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:24.741826  554049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:24.759313  554049 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 13:29:24.777075  554049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:24.793754  554049 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:24.819676  554049 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:24.834925  554049 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 13:29:24.851569  554049 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1109 13:29:24.851655  554049 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1109 13:29:24.880394  554049 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 13:29:24.896630  554049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:29:25.060770  554049 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 13:29:25.189371  554049 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 13:29:25.189516  554049 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 13:29:25.198545  554049 start.go:564] Will wait 60s for crictl version
	I1109 13:29:25.198660  554049 ssh_runner.go:195] Run: which crictl
	I1109 13:29:25.205416  554049 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1109 13:29:25.257021  554049 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
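
	"Will wait 60s for socket path" above is a stat poll: keep checking for the CRI socket until it appears or the timeout elapses, then interrogate crictl. A minimal sketch of the socket wait (the 500ms poll interval is assumed):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls os.Stat until the CRI socket appears or the
// timeout elapses, mirroring "Will wait 60s for socket path".
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
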
	I1109 13:29:25.257128  554049 ssh_runner.go:195] Run: crio --version
	I1109 13:29:25.294847  554049 ssh_runner.go:195] Run: crio --version
	I1109 13:29:25.335910  554049 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1109 13:29:25.340895  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:25.341471  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:25.341501  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:25.341823  554049 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1109 13:29:25.348236  554049 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:29:25.368735  554049 kubeadm.go:884] updating cluster {Name:addons-640912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-640912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 13:29:25.368898  554049 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:29:25.368946  554049 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:29:25.416200  554049 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1109 13:29:25.416292  554049 ssh_runner.go:195] Run: which lz4
	I1109 13:29:25.422425  554049 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1109 13:29:25.429188  554049 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1109 13:29:25.429238  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1109 13:29:27.484253  554049 crio.go:462] duration metric: took 2.061869484s to copy over tarball
	I1109 13:29:27.484374  554049 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1109 13:29:29.665537  554049 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.181121034s)
	I1109 13:29:29.665572  554049 crio.go:469] duration metric: took 2.181275636s to extract the tarball
	I1109 13:29:29.665583  554049 ssh_runner.go:146] rm: /preloaded.tar.lz4
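
	The preload sequence above is check-then-copy-then-extract: stat the tarball on the guest, scp it over when missing, unpack it into /var, and delete it. A rough sketch of the flow; runRemote is a hypothetical stand-in for minikube's ssh_runner, and this sketch executes commands locally:

package main

import (
	"fmt"
	"os/exec"
)

// runRemote stands in for minikube's ssh_runner; the real code runs
// these commands over SSH on the guest.
func runRemote(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

// ensurePreload mirrors the flow in the log: if the tarball is missing
// on the guest, copy it over, then unpack it into /var and remove it.
func ensurePreload(local, remote string) error {
	if err := runRemote("stat", "-c", "%s %y", remote); err != nil {
		// Not present yet: the real flow scp's the cached tarball over.
		if err := exec.Command("scp", local, "guest:"+remote).Run(); err != nil {
			return fmt.Errorf("copy tarball: %w", err)
		}
	}
	if err := runRemote("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", remote); err != nil {
		return fmt.Errorf("extract tarball: %w", err)
	}
	return runRemote("rm", "-f", remote)
}

func main() {
	// Paths are illustrative.
	fmt.Println(ensurePreload("preloaded-images.tar.lz4", "/preloaded.tar.lz4"))
}
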
	I1109 13:29:29.711172  554049 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:29:29.767518  554049 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:29:29.767551  554049 cache_images.go:86] Images are preloaded, skipping loading
	I1109 13:29:29.767560  554049 kubeadm.go:935] updating node { 192.168.39.228 8443 v1.34.1 crio true true} ...
	I1109 13:29:29.767658  554049 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-640912 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-640912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 13:29:29.767738  554049 ssh_runner.go:195] Run: crio config
	I1109 13:29:29.827752  554049 cni.go:84] Creating CNI manager for ""
	I1109 13:29:29.827802  554049 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 13:29:29.827828  554049 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 13:29:29.827856  554049 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.228 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-640912 NodeName:addons-640912 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 13:29:29.828036  554049 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-640912"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.228"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.228"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
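
	The kubeadm.yaml rendered above is a single stream of four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A stdlib-only Go sketch that splits such a stream on the document separator (contents abbreviated; the real file carries all the fields shown above):

package main

import (
	"fmt"
	"strings"
)

// Abbreviated copy of the multi-document stream, keeping only the
// apiVersion/kind of each document.
const cfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`

func main() {
	// "\n---\n" is the YAML document separator in block context.
	for i, doc := range strings.Split(cfg, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind:") {
				fmt.Printf("document %d: %s\n", i+1, line)
			}
		}
	}
}
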
	
	I1109 13:29:29.828128  554049 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 13:29:29.842993  554049 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 13:29:29.843074  554049 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 13:29:29.857721  554049 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1109 13:29:29.885131  554049 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 13:29:29.910962  554049 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1109 13:29:29.937168  554049 ssh_runner.go:195] Run: grep 192.168.39.228	control-plane.minikube.internal$ /etc/hosts
	I1109 13:29:29.942897  554049 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:29:29.961825  554049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:29:30.136776  554049 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:29:30.177152  554049 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912 for IP: 192.168.39.228
	I1109 13:29:30.177200  554049 certs.go:195] generating shared ca certs ...
	I1109 13:29:30.177243  554049 certs.go:227] acquiring lock for ca certs: {Name:mkc766226c1ec8ac0cc61519ae61374bb0aa3b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:30.177612  554049 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key
	I1109 13:29:30.526469  554049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt ...
	I1109 13:29:30.526517  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt: {Name:mk1e1ec152f9e7533279dd061df1b855d91797d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:30.526783  554049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key ...
	I1109 13:29:30.526817  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key: {Name:mkb474930e06e0f2d9550b3e47f06fa0412d8c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:30.526988  554049 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key
	I1109 13:29:31.103187  554049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.crt ...
	I1109 13:29:31.103229  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.crt: {Name:mkeee8761eaad8a6feacfb3f1772dbd1f57cdfd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.103462  554049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key ...
	I1109 13:29:31.103479  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key: {Name:mke1379b4418067ce1a11d365cf664bfd6b63fb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.103597  554049 certs.go:257] generating profile certs ...
	I1109 13:29:31.103681  554049 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.key
	I1109 13:29:31.103717  554049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt with IP's: []
	I1109 13:29:31.393629  554049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt ...
	I1109 13:29:31.393668  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: {Name:mk1aadbe63d88684ddb1deb4c7d25f36cf84bd13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.393894  554049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.key ...
	I1109 13:29:31.393913  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.key: {Name:mkb1582a5c9747ad241e1432ddae43398ee47c0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.393997  554049 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.key.1735871a
	I1109 13:29:31.394017  554049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.crt.1735871a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.228]
	I1109 13:29:31.559744  554049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.crt.1735871a ...
	I1109 13:29:31.559782  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.crt.1735871a: {Name:mk1cf5afc9dcb9c29b6fdbc1d8dbda4b8a0ad1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.560029  554049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.key.1735871a ...
	I1109 13:29:31.560045  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.key.1735871a: {Name:mk6eb23d524b1cf83b979febf555dc8a2670dd3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.560132  554049 certs.go:382] copying /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.crt.1735871a -> /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.crt
	I1109 13:29:31.560210  554049 certs.go:386] copying /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.key.1735871a -> /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.key
	I1109 13:29:31.560259  554049 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.key
	I1109 13:29:31.560279  554049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.crt with IP's: []
	I1109 13:29:31.942231  554049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.crt ...
	I1109 13:29:31.942265  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.crt: {Name:mk94f935952bddbb6d98595db3e977b6c297b768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.942468  554049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.key ...
	I1109 13:29:31.942486  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.key: {Name:mk5d96b637c774a9f2904169fcbe11646a6b30aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.942692  554049 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 13:29:31.942732  554049 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem (1082 bytes)
	I1109 13:29:31.942757  554049 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem (1123 bytes)
	I1109 13:29:31.942779  554049 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem (1679 bytes)
	I1109 13:29:31.943571  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 13:29:31.984717  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 13:29:32.026837  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 13:29:32.064723  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 13:29:32.101911  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1109 13:29:32.137993  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 13:29:32.174693  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 13:29:32.211419  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 13:29:32.251159  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 13:29:32.289056  554049 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 13:29:32.315536  554049 ssh_runner.go:195] Run: openssl version
	I1109 13:29:32.323702  554049 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 13:29:32.341220  554049 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:29:32.348539  554049 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:29:32.348611  554049 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:29:32.357914  554049 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 13:29:32.374131  554049 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 13:29:32.380778  554049 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 13:29:32.380859  554049 kubeadm.go:401] StartCluster: {Name:addons-640912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-640912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:29:32.380939  554049 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:29:32.381031  554049 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:29:32.437303  554049 cri.go:89] found id: ""
	I1109 13:29:32.437443  554049 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 13:29:32.455469  554049 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 13:29:32.474586  554049 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 13:29:32.490883  554049 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 13:29:32.490932  554049 kubeadm.go:158] found existing configuration files:
	
	I1109 13:29:32.490984  554049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 13:29:32.507029  554049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 13:29:32.507106  554049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 13:29:32.526992  554049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 13:29:32.542026  554049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 13:29:32.542094  554049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 13:29:32.557462  554049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 13:29:32.571729  554049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 13:29:32.571847  554049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 13:29:32.586445  554049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 13:29:32.600460  554049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 13:29:32.600546  554049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
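
	The four grep-then-rm exchanges above are one cleanup loop: any kubeconfig that does not reference https://control-plane.minikube.internal:8443 is removed so kubeadm regenerates it on init. A sketch with a dry-run stand-in for the ssh_runner:

package main

import "fmt"

// run is a hypothetical dry-run stand-in for minikube's ssh_runner: it
// prints the command and reports failure, simulating a fresh guest
// where none of the kubeconfigs exist yet.
func run(name string, args ...string) error {
	fmt.Println("Run:", name, args)
	return fmt.Errorf("%s: exit status 2", name)
}

func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if err := run("sudo", "grep", endpoint, f); err != nil {
			// Endpoint missing or file absent: remove it so kubeadm
			// regenerates a fresh config, as in the log above.
			_ = run("sudo", "rm", "-f", f)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}
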
	I1109 13:29:32.615463  554049 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1109 13:29:32.797507  554049 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 13:29:45.565254  554049 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 13:29:45.565352  554049 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 13:29:45.565451  554049 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 13:29:45.565580  554049 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 13:29:45.565676  554049 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 13:29:45.565749  554049 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 13:29:45.567749  554049 out.go:252]   - Generating certificates and keys ...
	I1109 13:29:45.567923  554049 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 13:29:45.568028  554049 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 13:29:45.568143  554049 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 13:29:45.568242  554049 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 13:29:45.568335  554049 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 13:29:45.568419  554049 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 13:29:45.568504  554049 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 13:29:45.568659  554049 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-640912 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I1109 13:29:45.568749  554049 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 13:29:45.568968  554049 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-640912 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I1109 13:29:45.569082  554049 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 13:29:45.569231  554049 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 13:29:45.569297  554049 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 13:29:45.569348  554049 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 13:29:45.569405  554049 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 13:29:45.569456  554049 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 13:29:45.569506  554049 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 13:29:45.569621  554049 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 13:29:45.569729  554049 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 13:29:45.569897  554049 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 13:29:45.570022  554049 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 13:29:45.571849  554049 out.go:252]   - Booting up control plane ...
	I1109 13:29:45.572019  554049 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 13:29:45.572139  554049 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 13:29:45.572306  554049 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 13:29:45.572529  554049 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 13:29:45.572725  554049 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 13:29:45.572929  554049 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 13:29:45.573081  554049 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 13:29:45.573164  554049 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 13:29:45.573346  554049 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 13:29:45.573511  554049 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 13:29:45.573612  554049 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002203832s
	I1109 13:29:45.573738  554049 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 13:29:45.573866  554049 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.228:8443/livez
	I1109 13:29:45.574035  554049 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 13:29:45.574163  554049 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 13:29:45.574287  554049 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.851777111s
	I1109 13:29:45.574388  554049 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.338951376s
	I1109 13:29:45.574496  554049 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.505486059s
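The [control-plane-check] timings above come from kubeadm polling each component's health endpoint until it answers; the same probes can be replayed by hand with the exact URLs from the log (-k skips verification, since the serving certs are signed by the cluster CA rather than a system-trusted one):

    curl -sk https://192.168.39.228:8443/livez      # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz        # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez          # kube-scheduler
    curl -s  http://127.0.0.1:10248/healthz         # kubelet (plain HTTP)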
	I1109 13:29:45.574629  554049 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 13:29:45.574811  554049 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 13:29:45.574907  554049 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 13:29:45.575162  554049 kubeadm.go:319] [mark-control-plane] Marking the node addons-640912 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 13:29:45.575281  554049 kubeadm.go:319] [bootstrap-token] Using token: law7ws.rcnk7pdq4fp4bzd0
	I1109 13:29:45.577265  554049 out.go:252]   - Configuring RBAC rules ...
	I1109 13:29:45.577435  554049 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 13:29:45.577562  554049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 13:29:45.577700  554049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 13:29:45.577922  554049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 13:29:45.578078  554049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 13:29:45.578193  554049 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 13:29:45.578331  554049 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 13:29:45.578397  554049 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 13:29:45.578469  554049 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 13:29:45.578482  554049 kubeadm.go:319] 
	I1109 13:29:45.578574  554049 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 13:29:45.578586  554049 kubeadm.go:319] 
	I1109 13:29:45.578689  554049 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 13:29:45.578704  554049 kubeadm.go:319] 
	I1109 13:29:45.578741  554049 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 13:29:45.578846  554049 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 13:29:45.578924  554049 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 13:29:45.578942  554049 kubeadm.go:319] 
	I1109 13:29:45.579023  554049 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 13:29:45.579033  554049 kubeadm.go:319] 
	I1109 13:29:45.579099  554049 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 13:29:45.579142  554049 kubeadm.go:319] 
	I1109 13:29:45.579228  554049 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 13:29:45.579340  554049 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 13:29:45.579492  554049 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 13:29:45.579520  554049 kubeadm.go:319] 
	I1109 13:29:45.579616  554049 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 13:29:45.579730  554049 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 13:29:45.579758  554049 kubeadm.go:319] 
	I1109 13:29:45.579905  554049 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token law7ws.rcnk7pdq4fp4bzd0 \
	I1109 13:29:45.580053  554049 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8c9d3f71532339ff1f74fe1baae16b7e7ca1ed75ea1b3aa4741816f874e79d71 \
	I1109 13:29:45.580099  554049 kubeadm.go:319] 	--control-plane 
	I1109 13:29:45.580109  554049 kubeadm.go:319] 
	I1109 13:29:45.580227  554049 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 13:29:45.580245  554049 kubeadm.go:319] 
	I1109 13:29:45.580332  554049 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token law7ws.rcnk7pdq4fp4bzd0 \
	I1109 13:29:45.580500  554049 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8c9d3f71532339ff1f74fe1baae16b7e7ca1ed75ea1b3aa4741816f874e79d71 
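The join commands above pin the cluster CA with --discovery-token-ca-cert-hash: the value is a SHA-256 digest of the CA's DER-encoded public key, so a joining machine can verify it is talking to the right control plane before trusting anything it serves. The hash can be recomputed and compared against the log, using the standard kubeadm recipe pointed at minikube's cert dir:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # expected (minus the sha256: prefix):
    # 8c9d3f71532339ff1f74fe1baae16b7e7ca1ed75ea1b3aa4741816f874e79d71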
	I1109 13:29:45.580520  554049 cni.go:84] Creating CNI manager for ""
	I1109 13:29:45.580531  554049 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 13:29:45.582621  554049 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1109 13:29:45.584213  554049 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1109 13:29:45.605204  554049 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
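The 496 bytes scp'd into /etc/cni/net.d/1-k8s.conflist are minikube's bridge CNI configuration. The literal contents are not captured in the log; the sketch below is a representative bridge conflist of the kind minikube generates (the subnet and plugin list are assumptions, not the recorded file):

    $ sudo cat /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }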
	I1109 13:29:45.642398  554049 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 13:29:45.642500  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:45.642500  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-640912 minikube.k8s.io/updated_at=2025_11_09T13_29_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=addons-640912 minikube.k8s.io/primary=true
	I1109 13:29:45.724991  554049 ops.go:34] apiserver oom_adj: -16
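A value of -16 on the legacy /proc/<pid>/oom_adj scale corresponds to the -997 oom_score_adj the kubelet assigns to critical static pods, so the kernel's OOM killer will sacrifice nearly anything else before the API server. The modern knob can be read directly (typically -997 for control-plane pods):

    cat /proc/$(pgrep -xn kube-apiserver)/oom_score_adj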
	I1109 13:29:45.860983  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:46.361977  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:46.862176  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:47.362111  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:47.861219  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:48.361843  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:48.861251  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:49.362130  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:49.861782  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:50.361731  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:50.861735  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:50.977332  554049 kubeadm.go:1114] duration metric: took 5.33492595s to wait for elevateKubeSystemPrivileges
	I1109 13:29:50.977378  554049 kubeadm.go:403] duration metric: took 18.596524599s to StartCluster
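The burst of identical `kubectl get sa default` calls above is the elevateKubeSystemPrivileges wait: service accounts are created asynchronously by the controller-manager, so minikube polls roughly every 500ms until the default service account exists (5.33s here) before declaring the cluster usable. The same wait, as a plain shell loop over the commands from the log:

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done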
	I1109 13:29:50.977400  554049 settings.go:142] acquiring lock: {Name:mkb59fcf785d78efbba1217c69544ee37b77198f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:50.977564  554049 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 13:29:50.978027  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/kubeconfig: {Name:mka7e7e8d5d1d87facf220110c90862a74355591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:50.978280  554049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 13:29:50.978317  554049 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:29:50.978398  554049 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1109 13:29:50.978554  554049 addons.go:70] Setting yakd=true in profile "addons-640912"
	I1109 13:29:50.978574  554049 config.go:182] Loaded profile config "addons-640912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:29:50.978592  554049 addons.go:70] Setting metrics-server=true in profile "addons-640912"
	I1109 13:29:50.978604  554049 addons.go:239] Setting addon metrics-server=true in "addons-640912"
	I1109 13:29:50.978583  554049 addons.go:70] Setting inspektor-gadget=true in profile "addons-640912"
	I1109 13:29:50.978629  554049 addons.go:70] Setting ingress=true in profile "addons-640912"
	I1109 13:29:50.978638  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978640  554049 addons.go:70] Setting ingress-dns=true in profile "addons-640912"
	I1109 13:29:50.978642  554049 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-640912"
	I1109 13:29:50.978649  554049 addons.go:239] Setting addon ingress=true in "addons-640912"
	I1109 13:29:50.978587  554049 addons.go:70] Setting default-storageclass=true in profile "addons-640912"
	I1109 13:29:50.978657  554049 addons.go:239] Setting addon ingress-dns=true in "addons-640912"
	I1109 13:29:50.978690  554049 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-640912"
	I1109 13:29:50.978715  554049 addons.go:70] Setting storage-provisioner=true in profile "addons-640912"
	I1109 13:29:50.978728  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978738  554049 addons.go:239] Setting addon storage-provisioner=true in "addons-640912"
	I1109 13:29:50.978757  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978772  554049 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-640912"
	I1109 13:29:50.978822  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978632  554049 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-640912"
	I1109 13:29:50.980356  554049 addons.go:70] Setting volumesnapshots=true in profile "addons-640912"
	I1109 13:29:50.980394  554049 addons.go:239] Setting addon volumesnapshots=true in "addons-640912"
	I1109 13:29:50.980427  554049 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-640912"
	I1109 13:29:50.980439  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.980467  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.980505  554049 addons.go:70] Setting registry=true in profile "addons-640912"
	I1109 13:29:50.980528  554049 addons.go:239] Setting addon registry=true in "addons-640912"
	I1109 13:29:50.980562  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978694  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.980786  554049 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-640912"
	I1109 13:29:50.980833  554049 addons.go:70] Setting registry-creds=true in profile "addons-640912"
	I1109 13:29:50.980872  554049 addons.go:239] Setting addon registry-creds=true in "addons-640912"
	I1109 13:29:50.980911  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.980979  554049 addons.go:70] Setting gcp-auth=true in profile "addons-640912"
	I1109 13:29:50.981016  554049 mustload.go:66] Loading cluster: addons-640912
	I1109 13:29:50.981255  554049 config.go:182] Loaded profile config "addons-640912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:29:50.980845  554049 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-640912"
	I1109 13:29:50.981636  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978582  554049 addons.go:239] Setting addon yakd=true in "addons-640912"
	I1109 13:29:50.981926  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.982378  554049 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-640912"
	I1109 13:29:50.982484  554049 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-640912"
	I1109 13:29:50.978638  554049 addons.go:70] Setting cloud-spanner=true in profile "addons-640912"
	I1109 13:29:50.983108  554049 out.go:179] * Verifying Kubernetes components...
	I1109 13:29:50.983376  554049 addons.go:239] Setting addon cloud-spanner=true in "addons-640912"
	I1109 13:29:50.983433  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978647  554049 addons.go:239] Setting addon inspektor-gadget=true in "addons-640912"
	I1109 13:29:50.983505  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.983818  554049 addons.go:70] Setting volcano=true in profile "addons-640912"
	I1109 13:29:50.983847  554049 addons.go:239] Setting addon volcano=true in "addons-640912"
	I1109 13:29:50.983888  554049 host.go:66] Checking if "addons-640912" exists ...
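The wall of "Setting addon X=true" lines above is the profile-level bookkeeping behind the toEnable map; the same switches are exposed through the standard minikube CLI, e.g.:

    minikube -p addons-640912 addons list
    minikube -p addons-640912 addons enable metrics-server
    minikube -p addons-640912 addons disable volcano   # rejected on crio anyway, per the warning further below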
	I1109 13:29:50.985888  554049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:29:50.988835  554049 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 13:29:50.988855  554049 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1109 13:29:50.988850  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1109 13:29:50.990206  554049 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 13:29:50.990229  554049 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1109 13:29:50.990256  554049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1109 13:29:50.990263  554049 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1109 13:29:50.990235  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 13:29:50.990626  554049 addons.go:239] Setting addon default-storageclass=true in "addons-640912"
	I1109 13:29:50.990686  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.990970  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.992322  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1109 13:29:50.992399  554049 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1109 13:29:50.992401  554049 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1109 13:29:50.992422  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1109 13:29:50.994043  554049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:29:50.994224  554049 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1109 13:29:50.994233  554049 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1109 13:29:50.994273  554049 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1109 13:29:50.994288  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1109 13:29:50.994234  554049 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1109 13:29:50.995216  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	W1109 13:29:50.994377  554049 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1109 13:29:50.994419  554049 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-640912"
	I1109 13:29:50.995510  554049 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1109 13:29:50.995521  554049 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1109 13:29:50.995518  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.995532  554049 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1109 13:29:50.995544  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1109 13:29:50.997116  554049 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1109 13:29:50.995622  554049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1109 13:29:50.996314  554049 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1109 13:29:50.996379  554049 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1109 13:29:50.996881  554049 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 13:29:50.997509  554049 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 13:29:50.997520  554049 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1109 13:29:50.997721  554049 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1109 13:29:50.997770  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1109 13:29:50.997196  554049 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1109 13:29:50.997851  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1109 13:29:50.998125  554049 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1109 13:29:50.998204  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1109 13:29:50.998240  554049 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1109 13:29:50.998255  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1109 13:29:50.998809  554049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:29:51.000211  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1109 13:29:51.000212  554049 out.go:179]   - Using image docker.io/registry:3.0.0
	I1109 13:29:51.000409  554049 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1109 13:29:51.000431  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1109 13:29:51.001876  554049 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1109 13:29:51.001963  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1109 13:29:51.002318  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.003013  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1109 13:29:51.003111  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.004326  554049 out.go:179]   - Using image docker.io/busybox:stable
	I1109 13:29:51.004405  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.004447  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.005229  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.005291  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.005684  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.005724  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1109 13:29:51.006264  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.006874  554049 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1109 13:29:51.007777  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.008155  554049 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1109 13:29:51.008180  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1109 13:29:51.008195  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1109 13:29:51.008364  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.009951  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.009992  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.010700  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.010752  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.010781  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1109 13:29:51.010862  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.011032  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.011751  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.012288  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1109 13:29:51.012399  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.012548  554049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1109 13:29:51.012598  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.012686  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.012719  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.013507  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.013623  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.013651  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.013707  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.014346  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.014367  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.014370  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.014490  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.014521  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.014605  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.015131  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.015563  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.015596  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.015899  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.016519  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.016594  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.016603  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.016624  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.016745  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.016827  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.016931  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.017041  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.017091  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.017453  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.017728  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.017778  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.017836  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.017934  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.017973  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.018186  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.019148  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.020575  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.021218  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.021220  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.021272  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.021523  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.021963  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.021994  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.022184  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	W1109 13:29:51.386846  554049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57418->192.168.39.228:22: read: connection reset by peer
	I1109 13:29:51.386892  554049 retry.go:31] will retry after 222.983762ms: ssh: handshake failed: read tcp 192.168.39.1:57418->192.168.39.228:22: read: connection reset by peer
	W1109 13:29:51.444433  554049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57434->192.168.39.228:22: read: connection reset by peer
	I1109 13:29:51.444480  554049 retry.go:31] will retry after 227.572873ms: ssh: handshake failed: read tcp 192.168.39.1:57434->192.168.39.228:22: read: connection reset by peer
	W1109 13:29:51.612303  554049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57454->192.168.39.228:22: read: connection reset by peer
	I1109 13:29:51.612342  554049 retry.go:31] will retry after 211.681358ms: ssh: handshake failed: read tcp 192.168.39.1:57454->192.168.39.228:22: read: connection reset by peer
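The three handshake failures above are benign: sshd inside the just-booted VM is not yet accepting connections, so the dial is reset and sshutil retries each connection with a short randomized backoff (~210-230ms) until the handshake succeeds. The retry shape, as a generic shell sketch with illustrative delays:

    for delay in 0.22 0.23 0.21; do
      ssh -o ConnectTimeout=5 \
          -i /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa \
          docker@192.168.39.228 true && break
      sleep "$delay"
    done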
	I1109 13:29:52.010077  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 13:29:52.235070  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1109 13:29:52.331852  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 13:29:52.352738  554049 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1109 13:29:52.352773  554049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1109 13:29:52.395388  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1109 13:29:52.440660  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1109 13:29:52.445961  554049 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1109 13:29:52.446002  554049 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1109 13:29:52.448181  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1109 13:29:52.448236  554049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1109 13:29:52.544737  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1109 13:29:52.551025  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1109 13:29:52.566441  554049 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1109 13:29:52.566471  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1109 13:29:52.632342  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1109 13:29:52.729889  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1109 13:29:53.011917  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1109 13:29:53.259319  554049 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (2.280994349s)
	I1109 13:29:53.259435  554049 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (2.273499892s)
	I1109 13:29:53.259518  554049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 13:29:53.259530  554049 ssh_runner.go:195] Run: sudo systemctl start kubelet
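The long sed pipeline above edits CoreDNS's Corefile in flight: it inserts a hosts block ahead of the `forward . /etc/resolv.conf` directive so that host.minikube.internal resolves to the host-side gateway 192.168.39.1, adds `log` before `errors`, and replaces the ConfigMap with the result. Reconstructed from the sed expressions, the relevant Corefile fragment after the edit looks like:

    $ kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }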
	I1109 13:29:53.377079  554049 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1109 13:29:53.377125  554049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1109 13:29:53.410957  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1109 13:29:53.410995  554049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1109 13:29:53.492668  554049 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1109 13:29:53.492714  554049 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1109 13:29:53.541620  554049 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1109 13:29:53.541665  554049 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1109 13:29:53.652096  554049 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1109 13:29:53.652133  554049 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1109 13:29:53.995555  554049 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1109 13:29:53.995587  554049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1109 13:29:54.033651  554049 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 13:29:54.033695  554049 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1109 13:29:54.067822  554049 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1109 13:29:54.067856  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1109 13:29:54.196207  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1109 13:29:54.196244  554049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1109 13:29:54.227433  554049 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1109 13:29:54.227464  554049 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1109 13:29:54.679076  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1109 13:29:54.679121  554049 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1109 13:29:54.696117  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 13:29:54.741459  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1109 13:29:54.881208  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1109 13:29:54.881247  554049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1109 13:29:54.915127  554049 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1109 13:29:54.915176  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1109 13:29:55.272351  554049 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:29:55.272388  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1109 13:29:55.364173  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1109 13:29:55.383308  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1109 13:29:55.383345  554049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1109 13:29:56.059938  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:29:56.248473  554049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1109 13:29:56.248504  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1109 13:29:57.014690  554049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1109 13:29:57.014726  554049 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1109 13:29:57.519712  554049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1109 13:29:57.519740  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1109 13:29:58.054597  554049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1109 13:29:58.054639  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1109 13:29:58.434364  554049 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1109 13:29:58.438873  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:58.439831  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:58.439910  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:58.440311  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:58.622773  554049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1109 13:29:58.622820  554049 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1109 13:29:59.371356  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1109 13:29:59.505293  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.495157485s)
	I1109 13:29:59.785061  554049 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1109 13:30:00.392747  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.157615738s)
	I1109 13:30:00.392753  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.060856374s)
	I1109 13:30:00.392830  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.997405175s)
	I1109 13:30:00.392922  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.952211987s)
	I1109 13:30:00.738340  554049 addons.go:239] Setting addon gcp-auth=true in "addons-640912"
	I1109 13:30:00.738422  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:30:00.741137  554049 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1109 13:30:00.745233  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:30:00.746101  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:30:00.746150  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:30:00.746504  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:30:04.324164  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.779383152s)
	I1109 13:30:04.324212  554049 addons.go:480] Verifying addon ingress=true in "addons-640912"
	I1109 13:30:04.324312  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (11.691928557s)
	I1109 13:30:04.324279  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.773208765s)
	I1109 13:30:04.324397  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.594469361s)
	I1109 13:30:04.324471  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.312528576s)
	I1109 13:30:04.324546  554049 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (11.064996826s)
	I1109 13:30:04.324573  554049 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (11.065034447s)
	I1109 13:30:04.324596  554049 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1109 13:30:04.324784  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.628636294s)
	I1109 13:30:04.324825  554049 addons.go:480] Verifying addon metrics-server=true in "addons-640912"
	I1109 13:30:04.324875  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.583378818s)
	I1109 13:30:04.324892  554049 addons.go:480] Verifying addon registry=true in "addons-640912"
	I1109 13:30:04.325127  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.2651527s)
	W1109 13:30:04.325346  554049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1109 13:30:04.325153  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.960820727s)
	I1109 13:30:04.325383  554049 retry.go:31] will retry after 202.022969ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
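The failure above is the usual CRD ordering race: csi-hostpath-snapshotclass.yaml submits a VolumeSnapshotClass in the same apply that creates the CRDs defining that kind, and the API server has not finished establishing the new types when the custom resource arrives, hence "no matches for kind ... ensure CRDs are installed first". minikube handles it by retrying (retry.go:31 above, then the `apply --force` rerun below); done by hand, the equivalent fix is to apply the CRDs first and wait for them to become Established. A sketch with kubectl, reusing the manifest paths from the failing command:

	# 1. install the CRDs on their own
	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	# 2. block until the new kinds are actually served
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	# 3. only now apply the resources that use them
	kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml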
	I1109 13:30:04.325661  554049 node_ready.go:35] waiting up to 6m0s for node "addons-640912" to be "Ready" ...
	I1109 13:30:04.325983  554049 out.go:179] * Verifying ingress addon...
	I1109 13:30:04.326814  554049 out.go:179] * Verifying registry addon...
	I1109 13:30:04.327420  554049 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-640912 service yakd-dashboard -n yakd-dashboard
	
	I1109 13:30:04.328195  554049 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1109 13:30:04.328903  554049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
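Each kapi.go:75 waiter polls the pod list for its label selector until every match is Running; the "current state: Pending" lines that follow are those polls. From outside the test, the same check is roughly a kubectl wait on the selector, e.g. for the registry addon:

	kubectl --context addons-640912 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry \
	  --for=condition=Ready --timeout=10m

(kubectl gates on the Ready condition rather than the Running phase, so it is a slightly stricter check; the same pattern covers the ingress-nginx, csi-hostpath-driver, and gcp-auth selectors in this log.)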
	I1109 13:30:04.421113  554049 node_ready.go:49] node "addons-640912" is "Ready"
	I1109 13:30:04.421170  554049 node_ready.go:38] duration metric: took 95.473426ms for node "addons-640912" to be "Ready" ...
	I1109 13:30:04.421193  554049 api_server.go:52] waiting for apiserver process to appear ...
	I1109 13:30:04.421252  554049 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
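The pgrep flag combination is terse but precise: -f matches against the full command line, -x requires that whole line to match the pattern exactly, and -n keeps only the newest matching process, so the probe finds the current kube-apiserver even across restarts:

	# newest (-n) process whose full command line (-f) matches the pattern exactly (-x)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'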
	I1109 13:30:04.436573  554049 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1109 13:30:04.436601  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:04.437324  554049 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1109 13:30:04.437349  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:04.527734  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:30:04.850888  554049 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-640912" context rescaled to 1 replicas
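kapi.go:214 trims the stock two-replica coredns deployment down to a single replica for this one-node cluster; the two coredns-66bc5c9577-* pods still visible in the kube-system listings below are the pre-rescale pair. By hand, the rescale would be:

	kubectl --context addons-640912 -n kube-system scale deployment coredns --replicas=1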
	I1109 13:30:04.887833  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:04.891917  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:05.342314  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:05.346335  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:05.863995  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:05.864036  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:06.396694  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:06.402111  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:06.554519  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.183096689s)
	I1109 13:30:06.554582  554049 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-640912"
	I1109 13:30:06.554594  554049 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.813405865s)
	I1109 13:30:06.554623  554049 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.133355636s)
	I1109 13:30:06.554651  554049 api_server.go:72] duration metric: took 15.57629663s to wait for apiserver process to appear ...
	I1109 13:30:06.554661  554049 api_server.go:88] waiting for apiserver healthz status ...
	I1109 13:30:06.554691  554049 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I1109 13:30:06.556403  554049 out.go:179] * Verifying csi-hostpath-driver addon...
	I1109 13:30:06.556401  554049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:30:06.559165  554049 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1109 13:30:06.559901  554049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1109 13:30:06.560844  554049 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1109 13:30:06.560881  554049 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1109 13:30:06.598285  554049 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
	ok
	I1109 13:30:06.612843  554049 api_server.go:141] control plane version: v1.34.1
	I1109 13:30:06.612893  554049 api_server.go:131] duration metric: took 58.222701ms to wait for apiserver health ...
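api_server.go:253 declares the control plane healthy as soon as the healthz endpoint answers 200, then reads the build version the same way. Under the default anonymous-access RBAC both paths are reachable without credentials, so the probe from the host is a pair of one-liners (-k because the serving cert is signed by the cluster's own CA):

	curl -sk https://192.168.39.228:8443/healthz
	# ok
	curl -sk https://192.168.39.228:8443/version | grep gitVersion
	#   "gitVersion": "v1.34.1",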
	I1109 13:30:06.612928  554049 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 13:30:06.645111  554049 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1109 13:30:06.645145  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:06.677129  554049 system_pods.go:59] 20 kube-system pods found
	I1109 13:30:06.677261  554049 system_pods.go:61] "amd-gpu-device-plugin-2tv7p" [0019249b-f40e-4609-b592-f9fcc146c80a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1109 13:30:06.677278  554049 system_pods.go:61] "coredns-66bc5c9577-s9xxb" [e5d6eb11-cd0f-4ef0-b1ae-938e4c32f04b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:06.677293  554049 system_pods.go:61] "coredns-66bc5c9577-xtt8z" [4c0e27e8-3047-4a17-9435-f9185e872696] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:06.677304  554049 system_pods.go:61] "csi-hostpath-attacher-0" [d822fdee-fb25-4634-83b9-e9da33b6b333] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:30:06.677316  554049 system_pods.go:61] "csi-hostpath-resizer-0" [3d5fea9b-7c9b-4665-ac68-5e296d36729f] Pending
	I1109 13:30:06.677326  554049 system_pods.go:61] "csi-hostpathplugin-9dzzw" [ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:30:06.677338  554049 system_pods.go:61] "etcd-addons-640912" [be48210e-3e9d-4a68-b5a4-80f4a26fa4be] Running
	I1109 13:30:06.677344  554049 system_pods.go:61] "kube-apiserver-addons-640912" [066566b1-566d-491b-8be7-e1bf16b2ecb1] Running
	I1109 13:30:06.677349  554049 system_pods.go:61] "kube-controller-manager-addons-640912" [daaf7f94-d2de-42ec-8cd7-37bae6ec43ad] Running
	I1109 13:30:06.677359  554049 system_pods.go:61] "kube-ingress-dns-minikube" [fa72b9e2-abd1-49dd-b3cb-155aafc6e442] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:30:06.677369  554049 system_pods.go:61] "kube-proxy-8hbf4" [97813667-ffbc-4b8a-a122-3fa531d57ee3] Running
	I1109 13:30:06.677376  554049 system_pods.go:61] "kube-scheduler-addons-640912" [051715db-03e5-4cae-9b74-60fe58511b6b] Running
	I1109 13:30:06.677387  554049 system_pods.go:61] "metrics-server-85b7d694d7-lfl7n" [d4722f0c-b942-4bf8-88e4-a8d5b09f6fdf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:30:06.677399  554049 system_pods.go:61] "nvidia-device-plugin-daemonset-vhlxq" [b849210d-a4dd-4bfc-ad75-3bf99c214e37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:30:06.677407  554049 system_pods.go:61] "registry-6b586f9694-wm87r" [59b2fc4f-c09e-47b3-ac30-a3dce9d1d9b1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:30:06.677419  554049 system_pods.go:61] "registry-creds-764b6fb674-z2sqx" [8e9dea64-3610-47e1-9a4d-1f13f275439e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:30:06.677434  554049 system_pods.go:61] "registry-proxy-d6q28" [bc34c737-250f-4084-859d-d21c43a619d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:30:06.677445  554049 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pgl85" [d9a227fb-a833-4bc3-928b-eacf5e94bd0f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:06.677474  554049 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qk9k2" [548acda2-9430-4b25-a3a8-09e0a17aa95f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:06.677489  554049 system_pods.go:61] "storage-provisioner" [59b766d9-adb7-4e0c-bc68-9d9cf8fbbdba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:30:06.677500  554049 system_pods.go:74] duration metric: took 64.564101ms to wait for pod list to return data ...
	I1109 13:30:06.677515  554049 default_sa.go:34] waiting for default service account to be created ...
	I1109 13:30:06.698871  554049 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1109 13:30:06.698911  554049 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1109 13:30:06.723783  554049 default_sa.go:45] found service account: "default"
	I1109 13:30:06.723870  554049 default_sa.go:55] duration metric: took 46.344804ms for default service account to be created ...
	I1109 13:30:06.723888  554049 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 13:30:06.784361  554049 system_pods.go:86] 20 kube-system pods found
	I1109 13:30:06.784424  554049 system_pods.go:89] "amd-gpu-device-plugin-2tv7p" [0019249b-f40e-4609-b592-f9fcc146c80a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1109 13:30:06.784438  554049 system_pods.go:89] "coredns-66bc5c9577-s9xxb" [e5d6eb11-cd0f-4ef0-b1ae-938e4c32f04b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:06.784456  554049 system_pods.go:89] "coredns-66bc5c9577-xtt8z" [4c0e27e8-3047-4a17-9435-f9185e872696] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:06.784466  554049 system_pods.go:89] "csi-hostpath-attacher-0" [d822fdee-fb25-4634-83b9-e9da33b6b333] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:30:06.784474  554049 system_pods.go:89] "csi-hostpath-resizer-0" [3d5fea9b-7c9b-4665-ac68-5e296d36729f] Pending
	I1109 13:30:06.784485  554049 system_pods.go:89] "csi-hostpathplugin-9dzzw" [ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:30:06.784495  554049 system_pods.go:89] "etcd-addons-640912" [be48210e-3e9d-4a68-b5a4-80f4a26fa4be] Running
	I1109 13:30:06.784616  554049 system_pods.go:89] "kube-apiserver-addons-640912" [066566b1-566d-491b-8be7-e1bf16b2ecb1] Running
	I1109 13:30:06.784630  554049 system_pods.go:89] "kube-controller-manager-addons-640912" [daaf7f94-d2de-42ec-8cd7-37bae6ec43ad] Running
	I1109 13:30:06.784642  554049 system_pods.go:89] "kube-ingress-dns-minikube" [fa72b9e2-abd1-49dd-b3cb-155aafc6e442] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:30:06.784654  554049 system_pods.go:89] "kube-proxy-8hbf4" [97813667-ffbc-4b8a-a122-3fa531d57ee3] Running
	I1109 13:30:06.784663  554049 system_pods.go:89] "kube-scheduler-addons-640912" [051715db-03e5-4cae-9b74-60fe58511b6b] Running
	I1109 13:30:06.784714  554049 system_pods.go:89] "metrics-server-85b7d694d7-lfl7n" [d4722f0c-b942-4bf8-88e4-a8d5b09f6fdf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:30:06.784734  554049 system_pods.go:89] "nvidia-device-plugin-daemonset-vhlxq" [b849210d-a4dd-4bfc-ad75-3bf99c214e37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:30:06.784749  554049 system_pods.go:89] "registry-6b586f9694-wm87r" [59b2fc4f-c09e-47b3-ac30-a3dce9d1d9b1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:30:06.784761  554049 system_pods.go:89] "registry-creds-764b6fb674-z2sqx" [8e9dea64-3610-47e1-9a4d-1f13f275439e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:30:06.784769  554049 system_pods.go:89] "registry-proxy-d6q28" [bc34c737-250f-4084-859d-d21c43a619d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:30:06.784779  554049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pgl85" [d9a227fb-a833-4bc3-928b-eacf5e94bd0f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:06.784787  554049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qk9k2" [548acda2-9430-4b25-a3a8-09e0a17aa95f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:06.784813  554049 system_pods.go:89] "storage-provisioner" [59b766d9-adb7-4e0c-bc68-9d9cf8fbbdba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:30:06.784835  554049 system_pods.go:126] duration metric: took 60.936237ms to wait for k8s-apps to be running ...
	I1109 13:30:06.784852  554049 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 13:30:06.784957  554049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
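system_svc.go gates on the unit state alone: with --quiet, systemctl is-active prints nothing and answers purely through its exit code (0 when active), which is all ssh_runner inspects. For example:

	sudo systemctl is-active --quiet kubelet && echo kubelet is active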
	I1109 13:30:06.790756  554049 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1109 13:30:06.790815  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1109 13:30:06.855567  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:06.856076  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:06.996894  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1109 13:30:07.069630  554049 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1109 13:30:07.069669  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:07.357817  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:07.358129  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:07.585714  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:07.837469  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:07.842429  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:08.001868  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.47407204s)
	I1109 13:30:08.001935  554049 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.216918597s)
	I1109 13:30:08.001974  554049 system_svc.go:56] duration metric: took 1.217116528s WaitForService to wait for kubelet
	I1109 13:30:08.001988  554049 kubeadm.go:587] duration metric: took 17.023632052s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:30:08.002022  554049 node_conditions.go:102] verifying NodePressure condition ...
	I1109 13:30:08.013233  554049 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1109 13:30:08.013279  554049 node_conditions.go:123] node cpu capacity is 2
	I1109 13:30:08.013321  554049 node_conditions.go:105] duration metric: took 11.288216ms to run NodePressure ...
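The NodePressure step just reads back what the kubelet reported in the node status; a jsonpath query over the node object shows the same numbers (swap capacity for allocatable if the check reads the schedulable amounts):

	kubectl --context addons-640912 get node addons-640912 \
	  -o jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}{"\n"}'
	# 2 17734596Ki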
	I1109 13:30:08.013341  554049 start.go:242] waiting for startup goroutines ...
	I1109 13:30:08.072285  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:08.333086  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:08.336474  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:08.572226  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:08.887336  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.890385858s)
	I1109 13:30:08.888525  554049 addons.go:480] Verifying addon gcp-auth=true in "addons-640912"
	I1109 13:30:08.890860  554049 out.go:179] * Verifying gcp-auth addon...
	I1109 13:30:08.892713  554049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1109 13:30:08.939244  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:08.939347  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:08.991310  554049 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1109 13:30:08.991337  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:09.098338  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:09.344858  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:09.345304  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:09.399368  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:09.570285  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:09.838385  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:09.840384  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:09.898869  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:10.065083  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:10.334202  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:10.334309  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:10.401950  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:10.569284  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:10.836515  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:10.838899  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:10.896313  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:11.067129  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:11.339416  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:11.340743  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:11.402448  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:11.566253  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:11.837985  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:11.838020  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:11.898902  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:12.066368  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:12.337501  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:12.338519  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:12.399240  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:12.571326  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:12.832263  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:12.838277  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:12.897716  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:13.073975  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:13.345785  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:13.348013  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:13.397374  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:13.564325  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:13.837325  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:13.843684  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:13.902254  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:14.068483  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:14.335320  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:14.338051  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:14.396277  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:14.565373  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:14.834165  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:14.834467  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:14.897445  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:15.064757  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:15.333021  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:15.333719  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:15.397830  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:15.566785  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:15.835560  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:15.838276  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:15.900642  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:16.067501  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:16.337462  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:16.337641  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:16.398587  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:16.566906  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:16.834191  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:16.834422  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:16.897896  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:17.066472  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:17.336985  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:17.337337  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:17.399227  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:17.565260  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:17.836508  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:17.837830  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:17.897337  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:18.065001  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:18.332999  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:18.335394  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:18.402571  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:18.564851  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:18.840456  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:18.843062  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:18.899589  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:19.068832  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:19.339870  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:19.341559  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:19.399386  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:19.587728  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:20.102869  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:20.102915  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:20.104530  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:20.104680  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:20.336692  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:20.336706  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:20.436134  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:20.563604  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:20.837295  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:20.843051  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:20.936258  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:21.065172  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:21.334067  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:21.335105  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:21.396790  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:21.564002  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:21.835247  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:21.835561  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:21.898139  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:22.070927  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:22.334447  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:22.334961  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:22.396866  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:22.567180  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:22.840032  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:22.840068  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:22.896778  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:23.071532  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:23.339919  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:23.340496  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:23.397236  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:23.566063  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:23.839678  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:23.841282  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:23.901245  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:24.071668  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:24.334636  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:24.335846  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:24.398620  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:24.567631  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:24.836032  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:24.836151  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:24.935042  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:25.065721  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:25.336610  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:25.337364  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:25.400426  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:25.566021  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:25.836480  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:25.838214  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:25.902147  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:26.071427  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:26.338573  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:26.338582  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:26.398771  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:26.565358  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:26.836720  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:26.840552  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:26.901096  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:27.067504  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:27.339750  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:27.341731  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:27.402242  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:27.569891  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:27.833392  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:27.833537  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:27.906589  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:28.065108  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:28.337155  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:28.337297  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:28.397195  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:28.566495  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:28.904921  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:28.907409  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:28.907434  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:29.072857  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:29.334467  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:29.336353  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:29.399920  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:29.566093  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:29.837017  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:29.840579  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:29.902450  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:30.065577  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:30.719201  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:30.724919  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:30.724939  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:30.724986  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:30.833958  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:30.834194  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:30.900339  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:31.065316  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:31.333087  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:31.333171  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:31.398332  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:31.564881  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:31.833924  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:31.837095  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:31.897424  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:32.069730  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:32.337945  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:32.340042  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:32.401234  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:32.567187  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:32.843640  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:32.847045  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:32.898376  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:33.069348  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:33.334614  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:33.339537  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:33.398429  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:33.566402  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:34.077754  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:34.078072  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:34.078683  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:34.079618  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:34.334588  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:34.337189  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:34.397855  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:34.572190  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:34.849654  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:34.849861  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:34.896948  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:35.074479  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:35.348183  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:35.356209  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:35.411951  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:35.570590  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:35.845515  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:35.845555  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:35.905142  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:36.071389  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:36.338701  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:36.340912  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:36.400596  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:36.568710  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:36.911585  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:36.915424  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:36.916949  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:37.067760  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:37.336355  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:37.339107  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:37.398618  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:37.569194  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:37.845063  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:37.845916  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:37.899000  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:38.067362  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:38.334562  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:38.336207  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:38.400168  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:38.571573  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:38.973809  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:38.974144  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:38.974151  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:39.068129  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:39.333195  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:39.335360  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:39.397654  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:39.564893  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:39.833320  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:39.839558  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:39.898484  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:40.065477  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:40.340552  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:40.341676  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:40.397951  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:40.568140  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:40.845076  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:40.845487  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:40.898753  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:41.071899  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:41.346589  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:41.359208  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:41.403903  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:41.571002  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:41.833974  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:41.837788  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:41.898684  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:42.069463  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:42.335582  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:42.338032  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:42.398193  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:42.565475  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:42.835535  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:42.837038  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:42.937572  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:43.073282  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:43.339090  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:43.339461  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:43.396901  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:43.586382  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:43.838864  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:43.843579  554049 kapi.go:107] duration metric: took 39.514673062s to wait for kubernetes.io/minikube-addons=registry ...
	I1109 13:30:43.905634  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:44.064934  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:44.332975  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:44.396420  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:44.571769  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:44.833998  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:44.897776  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:45.068095  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:45.344379  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:45.402752  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:45.574628  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:45.837165  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:45.899358  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:46.067886  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:46.335065  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:46.403112  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:46.577103  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:46.839115  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:46.896120  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:47.076119  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:47.350771  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:47.401893  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:47.571338  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:47.837062  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:47.896673  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:48.066817  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:48.337456  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:48.398614  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:48.565456  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:48.833611  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:48.897408  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:49.064823  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:49.335724  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:49.408948  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:49.565312  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:49.833445  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:49.898385  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:50.064095  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:50.334339  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:50.397598  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:50.569309  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:50.836332  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:50.898692  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:51.066221  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:51.480743  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:51.480846  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:51.568243  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:51.833039  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:51.933871  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:52.065619  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:52.335123  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:52.396946  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:52.566374  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:52.864538  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:52.956580  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:53.066131  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:53.340918  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:53.397918  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:53.570472  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:53.832824  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:53.899448  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:54.065472  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:54.332326  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:54.397534  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:54.568817  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:54.832947  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:54.901046  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:55.064454  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:55.335531  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:55.399529  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:55.569216  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:55.838545  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:55.905412  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:56.067458  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:56.334763  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:56.402225  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:56.768475  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:56.835262  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:56.907870  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:57.067772  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:57.339379  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:57.439713  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:57.573371  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:57.839441  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:57.908300  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:58.068309  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:58.338714  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:58.401258  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:58.565431  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:58.832874  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:58.897895  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:59.076776  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:59.332886  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:59.401336  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:59.572413  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:59.836884  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:59.935930  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:00.205382  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:00.341292  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:00.396631  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:00.568505  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:00.837424  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:00.929421  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:01.069724  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:01.335835  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:01.400290  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:01.564385  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:01.833209  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:01.898880  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:02.067659  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:02.333957  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:02.401527  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:02.573124  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:02.843273  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:02.946887  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:03.068597  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:03.336581  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:03.399764  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:03.567632  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:04.070184  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:04.071224  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:04.075196  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:04.337446  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:04.437852  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:04.566623  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:04.849898  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:04.946693  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:05.069001  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:05.335428  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:05.401410  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:05.566746  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:05.850306  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:05.910812  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:06.073522  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:06.350358  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:06.398770  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:06.570578  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:06.835835  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:06.937150  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:07.070212  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:07.342676  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:07.441121  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:07.575162  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:07.843325  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:07.898217  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:08.069896  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:08.336282  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:08.436654  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:08.572085  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:08.836872  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:08.900081  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:09.066104  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:09.331853  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:09.400057  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:09.564879  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:09.873005  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:09.897692  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:10.066725  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:10.339369  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:10.399557  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:10.572087  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:10.838743  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:10.897458  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:11.067721  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:11.335546  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:11.397389  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:11.566619  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:11.839606  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:11.902886  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:12.068399  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:12.332049  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:12.401492  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:12.565507  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:12.835898  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:12.907128  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:13.066925  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:13.338046  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:13.400870  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:13.563107  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:13.834771  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:13.937396  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:14.068487  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:14.332717  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:14.399661  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:14.570753  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:14.833332  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:14.897424  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:15.204038  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:15.339763  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:15.397926  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:15.568164  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:15.836548  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:15.899073  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:16.066864  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:16.333466  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:16.397331  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:16.567861  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:16.833409  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:16.897614  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:17.070130  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:17.337400  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:17.401374  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:17.565736  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:17.841910  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:17.898786  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:18.070624  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:18.333680  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:18.401244  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:18.571631  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:18.833883  554049 kapi.go:107] duration metric: took 1m14.505685559s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1109 13:31:18.898024  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:19.073709  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:19.402477  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:19.565307  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:19.904075  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:20.068726  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:20.398760  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:20.565697  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:20.896731  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:21.064644  554049 kapi.go:107] duration metric: took 1m14.504756398s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1109 13:31:21.397137  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:21.897734  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:22.398588  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:23.010336  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:23.397591  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:23.902542  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:24.399075  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:24.897122  554049 kapi.go:107] duration metric: took 1m16.004408046s to wait for kubernetes.io/minikube-addons=gcp-auth ...
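	The block of kapi.go:96 lines above is a fixed-interval poll: for each addon, minikube lists pods by label selector and logs the observed state until the pods leave Pending or the per-addon timeout fires. Below is a minimal client-go sketch of that polling pattern; it is an illustration only, not minikube's actual kapi code, and the helper name, the 500ms interval, and the Running-phase check are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsByLabel polls the API server until every pod matching the
// selector reports phase Running, or the timeout elapses. Hypothetical
// helper; minikube's real loop lives in kapi.go.
func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("timed out waiting for pods matching %q", selector)
}

func main() {} // construction of the clientset (e.g. via clientcmd) omitted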
	I1109 13:31:24.898930  554049 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-640912 cluster.
	I1109 13:31:24.900363  554049 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1109 13:31:24.901752  554049 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1109 13:31:24.903272  554049 out.go:179] * Enabled addons: storage-provisioner, inspektor-gadget, nvidia-device-plugin, registry-creds, default-storageclass, amd-gpu-device-plugin, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1109 13:31:24.904711  554049 addons.go:515] duration metric: took 1m33.926303204s for enable addons: enabled=[storage-provisioner inspektor-gadget nvidia-device-plugin registry-creds default-storageclass amd-gpu-device-plugin cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
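	The gcp-auth notes above mention opting a pod out of credential mounting via the `gcp-auth-skip-secret` label key. A minimal sketch of a pod object carrying that label follows; the pod and container names are illustrative, and the "true" value is an assumption since the message only names the key.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-creds", // illustrative name
			Labels: map[string]string{"gcp-auth-skip-secret": "true"}, // value assumed; the log message only names the key
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "docker.io/nginx:alpine"}},
		},
	}
	fmt.Println(pod.Name, pod.Labels)
}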
	I1109 13:31:24.904783  554049 start.go:247] waiting for cluster config update ...
	I1109 13:31:24.904829  554049 start.go:256] writing updated cluster config ...
	I1109 13:31:24.905185  554049 ssh_runner.go:195] Run: rm -f paused
	I1109 13:31:24.913730  554049 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 13:31:24.921584  554049 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xtt8z" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:24.930823  554049 pod_ready.go:94] pod "coredns-66bc5c9577-xtt8z" is "Ready"
	I1109 13:31:24.930856  554049 pod_ready.go:86] duration metric: took 9.238515ms for pod "coredns-66bc5c9577-xtt8z" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:24.935635  554049 pod_ready.go:83] waiting for pod "etcd-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:24.945855  554049 pod_ready.go:94] pod "etcd-addons-640912" is "Ready"
	I1109 13:31:24.945886  554049 pod_ready.go:86] duration metric: took 10.21877ms for pod "etcd-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:24.949503  554049 pod_ready.go:83] waiting for pod "kube-apiserver-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:24.957508  554049 pod_ready.go:94] pod "kube-apiserver-addons-640912" is "Ready"
	I1109 13:31:24.957542  554049 pod_ready.go:86] duration metric: took 7.99802ms for pod "kube-apiserver-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:24.967022  554049 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:25.321696  554049 pod_ready.go:94] pod "kube-controller-manager-addons-640912" is "Ready"
	I1109 13:31:25.321729  554049 pod_ready.go:86] duration metric: took 354.672523ms for pod "kube-controller-manager-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:25.518911  554049 pod_ready.go:83] waiting for pod "kube-proxy-8hbf4" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:25.924834  554049 pod_ready.go:94] pod "kube-proxy-8hbf4" is "Ready"
	I1109 13:31:25.924867  554049 pod_ready.go:86] duration metric: took 405.924687ms for pod "kube-proxy-8hbf4" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:26.125658  554049 pod_ready.go:83] waiting for pod "kube-scheduler-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:26.520634  554049 pod_ready.go:94] pod "kube-scheduler-addons-640912" is "Ready"
	I1109 13:31:26.520674  554049 pod_ready.go:86] duration metric: took 394.982788ms for pod "kube-scheduler-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:26.520688  554049 pod_ready.go:40] duration metric: took 1.606902329s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
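	The pod_ready.go lines above gate on each pod's Ready condition rather than its phase. A minimal sketch of that check against the client-go types follows; the helper name is illustrative, not the actual pod_ready.go implementation.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True, the same
// signal the pod_ready.go waits above key on.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}}}
	fmt.Println(isPodReady(p)) // true
}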
	I1109 13:31:26.575762  554049 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 13:31:26.577333  554049 out.go:179] * Done! kubectl is now configured to use "addons-640912" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.548583042Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:f516d00cd42560c02c8218477c1f740c22bdc3e04d80a67197f40634f93e478e,Verbose:false,}" file="otel-collector/interceptors.go:62" id=339d68a0-25c2-4bca-bf41-f9b42df31793 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.548848634Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:f516d00cd42560c02c8218477c1f740c22bdc3e04d80a67197f40634f93e478e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1762694978453339864,StartedAt:1762694978667353377,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.6.4-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50977dcfe4ea6e6a61a3e7cf80dace1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/50977dcfe4ea6e6a61a3e7cf80dace1e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/50977dcfe4ea6e6a61a3e7cf80dace1e/containers/etcd/04920f90,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPA
GATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-addons-640912_50977dcfe4ea6e6a61a3e7cf80dace1e/etcd/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=339d68a0-25c2-4bca-bf41-f9b42df31793 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.549696057Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:7a5312ba3c9de673540c31b087b548f07f9144cbe046b292ca5260fa3a2418c7,Verbose:false,}" file="otel-collector/interceptors.go:62" id=75165382-8de0-493e-abc2-3d6078921fb8 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.550121253Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:7a5312ba3c9de673540c31b087b548f07f9144cbe046b292ca5260fa3a2418c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1762694978449315323,StartedAt:1762694978590037280,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.34.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ac15728fbe3a146e056bd33fb08144,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b4ac15728fbe3a146e056bd33fb08144/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b4ac15728fbe3a146e056bd33fb08144/containers/kube-scheduler/061d3a0e,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-addons-640912_b4ac15728
fbe3a146e056bd33fb08144/kube-scheduler/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=75165382-8de0-493e-abc2-3d6078921fb8 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.551427627Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:b5f31d63b316b6b4fceaf2cd37baa19b18c8dd0332189d49ddc503733e9f8479,Verbose:false,}" file="otel-collector/interceptors.go:62" id=3d413809-f870-4187-a30b-225a89c04f32 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.551667122Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:b5f31d63b316b6b4fceaf2cd37baa19b18c8dd0332189d49ddc503733e9f8479,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1762694978427663799,StartedAt:1762694978553660323,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.34.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40db61fb06568701553ada1b7a8540a0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\"
:\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/40db61fb06568701553ada1b7a8540a0/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/40db61fb06568701553ada1b7a8540a0/containers/kube-apiserver/4aea1936,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRel
abel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-addons-640912_40db61fb06568701553ada1b7a8540a0/kube-apiserver/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=3d413809-f870-4187-a30b-225a89c04f32 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.554471449Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6aa1e2ae-b765-4420-8d25-bf9998e5d335 name=/runtime.v1.RuntimeService/Version
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.555078729Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6aa1e2ae-b765-4420-8d25-bf9998e5d335 name=/runtime.v1.RuntimeService/Version
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.557517765Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3c5c2625-2841-4fd8-92f0-0ae750c66b4b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.559203206Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762695495559165281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:510745,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c5c2625-2841-4fd8-92f0-0ae750c66b4b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.560195708Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8901e83-283e-418c-be55-353c26f160ea name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.560365324Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8901e83-283e-418c-be55-353c26f160ea name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.561383275Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eacf871a61d3462bce510b8a7be4d66a24fdc0a53616b1f2526147ab839b5856,PodSandboxId:33d3042607b18515964bf61cd134e3070fa66eff00570ae2814c08265872d03b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1762695090213216833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 495cc12a-d51f-43be-a567-96a5b4fad03a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f006ca3c6df3652caf34bc932e6e33524e0f2033958bcc7ac29db42f159f478,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1762695080346362124,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7f113792ee1aeacca2bc95d2857de71fa6874b3ee951659efa28320a524532,PodSandboxId:1b24a0719053d6cafc5b1179a4660b7383b3ad3fee86821846956848a958c4f6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1762695078417836968,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-8j7xf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 94b07f23-7caa-4ac1-8abc-174660a2f7a4,},Annotations:map[string]string{io.kubernetes.container.hash
: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:bcb1af463fabe45efed49bfd6e02c5935505b1c97fef7131674ae2149a586b22,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6a
a2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1762695070011057395,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2db81d254d4480693b2357d5d2f213cb9d2804ed6a941205876e88393cd0017,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1762695068080434957,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb391b54abedc41ac9d3bcc30eee9c5cd3b25472ad3e6eff849b0e56456bcbc,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1762695066793520772,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88235c8c760041e04a7d8b1282c1c320e7605e8c3b3d893a904d39e3fa00cad1,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1762695064197231200,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08612da2da66df5a5111cdfafd6a0836d6f81c959759b15c0c9375639120d746,PodSandboxId:8fb95cb2623328f10eff3639cf25bd416d368d576613f64dd7bda936355116df,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1762695062447477858,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5fea9b-7c9b-4665-ac68-5e296d36729f,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dad22d6eb19cdb7a777287c38e623b79c1047aec888ff4247d50097f0dcbf9d3,PodSandboxId:98a0f80edfe46561b7f02fb9fa6e4bfcfc1f280de0c3016fa890830d94c013a4,Metadata:&ContainerMetadata{Name
:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1762695060433730190,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-pgl85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a227fb-a833-4bc3-928b-eacf5e94bd0f,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a5967370a8e6321b89d46ce0959546f53b9081e6af174d4ed2f04897d68e3e8,PodSandboxId:2dc442c25fff12adbc1b9b8
800fafd8bb42c57ba7ebc983153c384dd3e34bd78,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1762695060318700326,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d822fdee-fb25-4634-83b9-e9da33b6b333,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41eca9a1773d7ff95373be1bf07b6899eb28ba4b0529d7612588ab6ad1febc3,PodSandbox
Id:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1762695058637139160,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:5a862686cb4d2c0e54448fa8a8311708c3ae04778fab3490fa67801913b4a055,PodSandboxId:00a46d438634f1c852fdbf4f05d1491e38e2c51de289efbd8dd87e8acae44088,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1762695057053274602,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7kdd8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9535a584-09d0-470c-bdca-f8b70a29fe14,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e63cfc2b183fa3210c39dd451f348428dc1b2b33acc056d3b3d495017fc722,PodSandboxId:ee70cbc29ea09381c413501b957e8aa6802592bc217dfbc25edd106492661579,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1762695056922801001,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-qk9k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548acda2-9430-4b25-a3a8-09e0a17aa95f,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e9e9c4d34d47e7d5a67e92ca9d8eb33ab0fd0ca3ffc05476e8c6342d0a0e7e,PodSandboxId:a3f82e39fba77bdb98d98684962717a0ca32dec2da0c643b2022e7041b41dcd1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1762695054467740038,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kj7f9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a16c39e-0afd-467d-9a68-c565ad3f14d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69fe297c1b50ac2bcfff0e0f2e8357c77f8446eda32f448474242c45e7beef16,PodSandboxId:57ab048400abbbc313169bbbd7ac0890f37bd48587fbee368b27c41aff728264,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1762695034556022840,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72b9e2-abd1-49dd-b3cb-155aafc6e442,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kuber
netes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfab18621429e6c41f473ab17e6f86a03dc38ea44f3a31f18af4b9bbfc4c4874,PodSandboxId:4d885cc41b56cb4c4a8b9e39ac9027f00f4b68b9e2c742ea8006768f27f47e22,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1762695014619786271,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2tv7p,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 0019249b-f40e-4609-b592-f9fcc146c80a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb6f2c7163352aca0e271bcb0b20322b3ac5af9c07044f0d2523de5bd207ba1,PodSandboxId:346d7ee8b9728fdb09d0b5fb01ac8c855b8a4ae22b2702c9a4b14f42f894a970,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762695002979098849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 59b766d9-adb7-4e0c-bc68-9d9cf8fbbdba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdc72298c5067f27022c29a3fba2313bef279aa1262921b54b2abfeb1bcfd05,PodSandboxId:f734f4ea6404b3851d49fa04e9e7d50d1b5a1dd1811142e147a667e58d9e5c61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762694992805161180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xtt8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c0e27e8-3047-4a17-9435-f9
185e872696,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d0daf4cf92a3d52b6f1d92f9a1bfeeac5d7a7f31da83e4fb7df39268b3e4ea5,PodSandboxId:28544be4ccc8d274ee0f1ce693a4962e7eb2818d5bbf182e93a9e060de954441,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762694991896517056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97813667-ffbc-4b8a-a122-3fa531d57ee3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1939a4061bbfb63f842de0036168e14c4e2c4e170bc480e3de75bc49ea789293,PodSandboxId:8461abff35ed3e424ad88b8d770b8fd6f1d6ca8979d38ab40c0900b1f9daa2aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762694978374008451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f2f0cef7cfff2538acb5ffb3152000c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f516d00cd42560c02c8218477c1f740c22bdc3e04d80a67197f40634f93e478e,PodSandboxId:9d5b3d3ae012e6b1f760709c59c955b1738d0410b8a917c420fa67cd1ad2af07,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4
cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762694978316917120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50977dcfe4ea6e6a61a3e7cf80dace1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5312ba3c9de673540c31b087b548f07f9144cbe046b292ca5260fa3a2418c7,PodSandboxId:8cb548decbe810fcc34c2b7b41b0518f383ba44d30236dad818bfd86a008c2a8,Metadata:&ContainerMetad
ata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762694978350105060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ac15728fbe3a146e056bd33fb08144,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5f31d63b316b6b4fceaf2cd37baa19b18c8dd0332189d49ddc503733e9f8
479,PodSandboxId:82cda88284e7044d38c14febb7d1ee51977ecbd36b41815ac224232e7fb2f5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762694978329597383,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40db61fb06568701553ada1b7a8540a0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8901e83-283e-418c-be55-353c26f160ea name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.616249809Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=e69c1772-6a6d-408d-83a4-e0ebb7cdf0e7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.619776394Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e4f958a93f7108f1d49282838ec9f81c37efe21a8cd9c74ea5404e4e930cd3b0,Metadata:&PodSandboxMetadata{Name:nginx,Uid:d25ea36b-0cab-4e93-a461-46fc1de68cdc,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762695136256622913,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d25ea36b-0cab-4e93-a461-46fc1de68cdc,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:32:15.929316192Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:92df61f89843be7571b8d8464f9d8ff8074b814eb46f213a2f5095b38d33bb3f,Metadata:&PodSandboxMetadata{Name:task-pv-pod,Uid:e7006701-5d88-4365-b100-377ce22b89cc,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762695134382615375,Labels:map[string]string{app: task-pv
-pod,io.kubernetes.container.name: POD,io.kubernetes.pod.name: task-pv-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e7006701-5d88-4365-b100-377ce22b89cc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:32:14.058320721Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bd39873b91332749498105de5954d990059ed5945f81f370b8d7f555bf99b1e8,Metadata:&PodSandboxMetadata{Name:test-local-path,Uid:24715673-6be0-4489-8fb3-064bda4b15c9,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762695115298506252,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 24715673-6be0-4489-8fb3-064bda4b15c9,run: test-local-path,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"run\":\"test-local-path\"},\"name\":\"test-local-path\",\"namespace\":\"
default\"},\"spec\":{\"containers\":[{\"command\":[\"sh\",\"-c\",\"echo 'local-path-provisioner' \\u003e /test/file1\"],\"image\":\"busybox:stable\",\"name\":\"busybox\",\"volumeMounts\":[{\"mountPath\":\"/test\",\"name\":\"data\"}]}],\"restartPolicy\":\"OnFailure\",\"volumes\":[{\"name\":\"data\",\"persistentVolumeClaim\":{\"claimName\":\"test-pvc\"}}]}}\n,kubernetes.io/config.seen: 2025-11-09T13:31:52.170231189Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:33d3042607b18515964bf61cd134e3070fa66eff00570ae2814c08265872d03b,Metadata:&PodSandboxMetadata{Name:busybox,Uid:495cc12a-d51f-43be-a567-96a5b4fad03a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762695087626082733,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 495cc12a-d51f-43be-a567-96a5b4fad03a,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:31:27.303397248Z,kubern
etes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1b24a0719053d6cafc5b1179a4660b7383b3ad3fee86821846956848a958c4f6,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-675c5ddd98-8j7xf,Uid:94b07f23-7caa-4ac1-8abc-174660a2f7a4,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762695068122734772,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-8j7xf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 94b07f23-7caa-4ac1-8abc-174660a2f7a4,pod-template-hash: 675c5ddd98,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:30:03.878414093Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&PodSandboxMetadata{Name:csi-hostpathplugin-9dzzw,Uid:ab8
236e3-2fb1-49f8-8fee-3f16fc4b3ca8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762695008891793446,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/component: plugin,app.kubernetes.io/instance: hostpath.csi.k8s.io,app.kubernetes.io/name: csi-hostpathplugin,app.kubernetes.io/part-of: csi-driver-host-path,controller-revision-hash: bfd669d76,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,kubernetes.io/minikube-addons: csi-hostpath-driver,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:30:06.287116520Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8fb95cb2623328f10eff3639cf25bd416d368d576613f64dd7bda936355116df,Metadata:&PodSandboxMetadata{Name:csi-hostpath-resizer-0,Uid:3d5fea9b-7c9b-4665-ac68-5e296d36729f,Namespace:kube-system,Attempt:0,},State:SAND
BOX_READY,CreatedAt:1762695008730289377,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/name: csi-hostpath-resizer,apps.kubernetes.io/pod-index: 0,controller-revision-hash: csi-hostpath-resizer-5f4978ffc6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5fea9b-7c9b-4665-ac68-5e296d36729f,kubernetes.io/minikube-addons: csi-hostpath-driver,statefulset.kubernetes.io/pod-name: csi-hostpath-resizer-0,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:30:06.737830878Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2dc442c25fff12adbc1b9b8800fafd8bb42c57ba7ebc983153c384dd3e34bd78,Metadata:&PodSandboxMetadata{Name:csi-hostpath-attacher-0,Uid:d822fdee-fb25-4634-83b9-e9da33b6b333,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762695008706395201,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.
io/name: csi-hostpath-attacher,apps.kubernetes.io/pod-index: 0,controller-revision-hash: csi-hostpath-attacher-576bccf57,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d822fdee-fb25-4634-83b9-e9da33b6b333,kubernetes.io/minikube-addons: csi-hostpath-driver,statefulset.kubernetes.io/pod-name: csi-hostpath-attacher-0,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:30:05.938686333Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:98a0f80edfe46561b7f02fb9fa6e4bfcfc1f280de0c3016fa890830d94c013a4,Metadata:&PodSandboxMetadata{Name:snapshot-controller-7d9fbc56b8-pgl85,Uid:d9a227fb-a833-4bc3-928b-eacf5e94bd0f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762695008176757583,Labels:map[string]string{app: snapshot-controller,io.kubernetes.container.name: POD,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-pgl85,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d9a227fb-a833-4bc3-928b-eacf5e94bd0f,pod-template-hash: 7d9fbc56b8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:30:04.238171732Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:00a46d438634f1c852fdbf4f05d1491e38e2c51de289efbd8dd87e8acae44088,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-7kdd8,Uid:9535a584-09d0-470c-bdca-f8b70a29fe14,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1762695007272739387,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 3e687653-c047-4821-8acc-4e4021c03ca0,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: 3e687653-c047-4821-8acc-4e4021c03ca0,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-7kdd8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9535a584-09d0-470
c-bdca-f8b70a29fe14,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:30:04.066193953Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ee70cbc29ea09381c413501b957e8aa6802592bc217dfbc25edd106492661579,Metadata:&PodSandboxMetadata{Name:snapshot-controller-7d9fbc56b8-qk9k2,Uid:548acda2-9430-4b25-a3a8-09e0a17aa95f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762695006866269276,Labels:map[string]string{app: snapshot-controller,io.kubernetes.container.name: POD,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-qk9k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548acda2-9430-4b25-a3a8-09e0a17aa95f,pod-template-hash: 7d9fbc56b8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:30:04.327746027Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a3f82e39fba77bdb98d98684962717a0ca32dec2da0c643b2022e7041b41dcd1,Metadata:&PodSandboxMetadata{Name:ingress-ngin
x-admission-create-kj7f9,Uid:8a16c39e-0afd-467d-9a68-c565ad3f14d1,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1762695006748994433,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: cee558f5-e08d-4736-9d13-de38e59dd6e5,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: cee558f5-e08d-4736-9d13-de38e59dd6e5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-kj7f9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a16c39e-0afd-467d-9a68-c565ad3f14d1,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:30:04.002719270Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:57ab048400abbbc313169bbbd7ac0890f37bd48587fbee368b27c41aff728264,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:fa72
b9e2-abd1-49dd-b3cb-155aafc6e442,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762695000925625164,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72b9e2-abd1-49dd-b3cb-155aafc6e442,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"
IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-11-09T13:29:59.606786800Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:346d7ee8b9728fdb09d0b5fb01ac8c855b8a4ae22b2702c9a4b14f42f894a970,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:59b766d9-adb7-4e0c-bc68-9d9cf8fbbdba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762695000119756941,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
9b766d9-adb7-4e0c-bc68-9d9cf8fbbdba,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-11-09T13:29:59.472835950Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4d885cc41b56cb4c4a8b9e39ac9027f00f4b68b9e2c742ea8006768f27f47e22,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plug
in-2tv7p,Uid:0019249b-f40e-4609-b592-f9fcc146c80a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762694995996015220,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-2tv7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0019249b-f40e-4609-b592-f9fcc146c80a,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:29:55.614180138Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:28544be4ccc8d274ee0f1ce693a4962e7eb2818d5bbf182e93a9e060de954441,Metadata:&PodSandboxMetadata{Name:kube-proxy-8hbf4,Uid:97813667-ffbc-4b8a-a122-3fa531d57ee3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762694991378519497,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8hbf4,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 97813667-ffbc-4b8a-a122-3fa531d57ee3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:29:50.411348659Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f734f4ea6404b3851d49fa04e9e7d50d1b5a1dd1811142e147a667e58d9e5c61,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-xtt8z,Uid:4c0e27e8-3047-4a17-9435-f9185e872696,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762694991182710649,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-xtt8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c0e27e8-3047-4a17-9435-f9185e872696,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:29:50.771255596Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8461abff35ed3e424ad88b8d770b8fd6f1d6ca8979d38ab40c0900b1f9daa2aa,Metadata
:&PodSandboxMetadata{Name:kube-controller-manager-addons-640912,Uid:7f2f0cef7cfff2538acb5ffb3152000c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762694978028080068,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f2f0cef7cfff2538acb5ffb3152000c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7f2f0cef7cfff2538acb5ffb3152000c,kubernetes.io/config.seen: 2025-11-09T13:29:37.209688553Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8cb548decbe810fcc34c2b7b41b0518f383ba44d30236dad818bfd86a008c2a8,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-640912,Uid:b4ac15728fbe3a146e056bd33fb08144,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762694978010277062,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes
.pod.name: kube-scheduler-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ac15728fbe3a146e056bd33fb08144,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b4ac15728fbe3a146e056bd33fb08144,kubernetes.io/config.seen: 2025-11-09T13:29:37.209689459Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:82cda88284e7044d38c14febb7d1ee51977ecbd36b41815ac224232e7fb2f5bf,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-640912,Uid:40db61fb06568701553ada1b7a8540a0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762694978008697408,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40db61fb06568701553ada1b7a8540a0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.228:8443,kubernetes.io/config.hash:
40db61fb06568701553ada1b7a8540a0,kubernetes.io/config.seen: 2025-11-09T13:29:37.209687524Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9d5b3d3ae012e6b1f760709c59c955b1738d0410b8a917c420fa67cd1ad2af07,Metadata:&PodSandboxMetadata{Name:etcd-addons-640912,Uid:50977dcfe4ea6e6a61a3e7cf80dace1e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762694978008080646,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50977dcfe4ea6e6a61a3e7cf80dace1e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.228:2379,kubernetes.io/config.hash: 50977dcfe4ea6e6a61a3e7cf80dace1e,kubernetes.io/config.seen: 2025-11-09T13:29:37.209681917Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e69c1772-6a6d-408d-83a4-e0ebb7cdf0e7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.623715012Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=befb24bf-dd86-441a-8777-edcea5563a6a name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.623839588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=befb24bf-dd86-441a-8777-edcea5563a6a name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.624555336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eacf871a61d3462bce510b8a7be4d66a24fdc0a53616b1f2526147ab839b5856,PodSandboxId:33d3042607b18515964bf61cd134e3070fa66eff00570ae2814c08265872d03b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1762695090213216833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 495cc12a-d51f-43be-a567-96a5b4fad03a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f006ca3c6df3652caf34bc932e6e33524e0f2033958bcc7ac29db42f159f478,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1762695080346362124,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7f113792ee1aeacca2bc95d2857de71fa6874b3ee951659efa28320a524532,PodSandboxId:1b24a0719053d6cafc5b1179a4660b7383b3ad3fee86821846956848a958c4f6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1762695078417836968,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-8j7xf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 94b07f23-7caa-4ac1-8abc-174660a2f7a4,},Annotations:map[string]string{io.kubernetes.container.hash
: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:bcb1af463fabe45efed49bfd6e02c5935505b1c97fef7131674ae2149a586b22,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6a
a2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1762695070011057395,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2db81d254d4480693b2357d5d2f213cb9d2804ed6a941205876e88393cd0017,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1762695068080434957,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb391b54abedc41ac9d3bcc30eee9c5cd3b25472ad3e6eff849b0e56456bcbc,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1762695066793520772,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88235c8c760041e04a7d8b1282c1c320e7605e8c3b3d893a904d39e3fa00cad1,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1762695064197231200,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08612da2da66df5a5111cdfafd6a0836d6f81c959759b15c0c9375639120d746,PodSandboxId:8fb95cb2623328f10eff3639cf25bd416d368d576613f64dd7bda936355116df,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1762695062447477858,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5fea9b-7c9b-4665-ac68-5e296d36729f,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dad22d6eb19cdb7a777287c38e623b79c1047aec888ff4247d50097f0dcbf9d3,PodSandboxId:98a0f80edfe46561b7f02fb9fa6e4bfcfc1f280de0c3016fa890830d94c013a4,Metadata:&ContainerMetadata{Name
:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1762695060433730190,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-pgl85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a227fb-a833-4bc3-928b-eacf5e94bd0f,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a5967370a8e6321b89d46ce0959546f53b9081e6af174d4ed2f04897d68e3e8,PodSandboxId:2dc442c25fff12adbc1b9b8
800fafd8bb42c57ba7ebc983153c384dd3e34bd78,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1762695060318700326,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d822fdee-fb25-4634-83b9-e9da33b6b333,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41eca9a1773d7ff95373be1bf07b6899eb28ba4b0529d7612588ab6ad1febc3,PodSandbox
Id:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1762695058637139160,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:5a862686cb4d2c0e54448fa8a8311708c3ae04778fab3490fa67801913b4a055,PodSandboxId:00a46d438634f1c852fdbf4f05d1491e38e2c51de289efbd8dd87e8acae44088,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1762695057053274602,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7kdd8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9535a584-09d0-470c-bdca-f8b70a29fe14,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e63cfc2b183fa3210c39dd451f348428dc1b2b33acc056d3b3d495017fc722,PodSandboxId:ee70cbc29ea09381c413501b957e8aa6802592bc217dfbc25edd106492661579,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1762695056922801001,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-qk9k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548acda2-9430-4b25-a3a8-09e0a17aa95f,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e9e9c4d34d47e7d5a67e92ca9d8eb33ab0fd0ca3ffc05476e8c6342d0a0e7e,PodSandboxId:a3f82e39fba77bdb98d98684962717a0ca32dec2da0c643b2022e7041b41dcd1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1762695054467740038,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kj7f9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a16c39e-0afd-467d-9a68-c565ad3f14d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69fe297c1b50ac2bcfff0e0f2e8357c77f8446eda32f448474242c45e7beef16,PodSandboxId:57ab048400abbbc313169bbbd7ac0890f37bd48587fbee368b27c41aff728264,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1762695034556022840,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72b9e2-abd1-49dd-b3cb-155aafc6e442,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kuber
netes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfab18621429e6c41f473ab17e6f86a03dc38ea44f3a31f18af4b9bbfc4c4874,PodSandboxId:4d885cc41b56cb4c4a8b9e39ac9027f00f4b68b9e2c742ea8006768f27f47e22,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1762695014619786271,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2tv7p,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 0019249b-f40e-4609-b592-f9fcc146c80a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb6f2c7163352aca0e271bcb0b20322b3ac5af9c07044f0d2523de5bd207ba1,PodSandboxId:346d7ee8b9728fdb09d0b5fb01ac8c855b8a4ae22b2702c9a4b14f42f894a970,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762695002979098849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 59b766d9-adb7-4e0c-bc68-9d9cf8fbbdba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdc72298c5067f27022c29a3fba2313bef279aa1262921b54b2abfeb1bcfd05,PodSandboxId:f734f4ea6404b3851d49fa04e9e7d50d1b5a1dd1811142e147a667e58d9e5c61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762694992805161180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xtt8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c0e27e8-3047-4a17-9435-f9
185e872696,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d0daf4cf92a3d52b6f1d92f9a1bfeeac5d7a7f31da83e4fb7df39268b3e4ea5,PodSandboxId:28544be4ccc8d274ee0f1ce693a4962e7eb2818d5bbf182e93a9e060de954441,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762694991896517056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97813667-ffbc-4b8a-a122-3fa531d57ee3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1939a4061bbfb63f842de0036168e14c4e2c4e170bc480e3de75bc49ea789293,PodSandboxId:8461abff35ed3e424ad88b8d770b8fd6f1d6ca8979d38ab40c0900b1f9daa2aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762694978374008451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f2f0cef7cfff2538acb5ffb3152000c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f516d00cd42560c02c8218477c1f740c22bdc3e04d80a67197f40634f93e478e,PodSandboxId:9d5b3d3ae012e6b1f760709c59c955b1738d0410b8a917c420fa67cd1ad2af07,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4
cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762694978316917120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50977dcfe4ea6e6a61a3e7cf80dace1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5312ba3c9de673540c31b087b548f07f9144cbe046b292ca5260fa3a2418c7,PodSandboxId:8cb548decbe810fcc34c2b7b41b0518f383ba44d30236dad818bfd86a008c2a8,Metadata:&ContainerMetad
ata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762694978350105060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ac15728fbe3a146e056bd33fb08144,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5f31d63b316b6b4fceaf2cd37baa19b18c8dd0332189d49ddc503733e9f8
479,PodSandboxId:82cda88284e7044d38c14febb7d1ee51977ecbd36b41815ac224232e7fb2f5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762694978329597383,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40db61fb06568701553ada1b7a8540a0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=befb24bf-dd86-441a-8777-edcea5563a6a name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.628785136Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c896f1d-d0d0-48f8-8df2-135fa022f4a3 name=/runtime.v1.RuntimeService/Version
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.629028384Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c896f1d-d0d0-48f8-8df2-135fa022f4a3 name=/runtime.v1.RuntimeService/Version
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.633650504Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c05f30b-6ec4-4c53-a330-1eff366dccdb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.635417803Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762695495635375449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:510745,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c05f30b-6ec4-4c53-a330-1eff366dccdb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.636568363Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de97fd77-e920-4f3d-b52c-40c8d86c2037 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.636646759Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de97fd77-e920-4f3d-b52c-40c8d86c2037 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:38:15 addons-640912 crio[808]: time="2025-11-09 13:38:15.637368009Z" level=debug msg="Response: &ListContainersResponse{...}" file="otel-collector/interceptors.go:74" id=de97fd77-e920-4f3d-b52c-40c8d86c2037 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	eacf871a61d34       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   33d3042607b18       busybox
	3f006ca3c6df3       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   464368ae55533       csi-hostpathplugin-9dzzw
	6c7f113792ee1       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd                             6 minutes ago       Running             controller                               0                   1b24a0719053d       ingress-nginx-controller-675c5ddd98-8j7xf
	bcb1af463fabe       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   464368ae55533       csi-hostpathplugin-9dzzw
	c2db81d254d44       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   464368ae55533       csi-hostpathplugin-9dzzw
	2bb391b54abed       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           7 minutes ago       Running             hostpath                                 0                   464368ae55533       csi-hostpathplugin-9dzzw
	88235c8c76004       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   464368ae55533       csi-hostpathplugin-9dzzw
	08612da2da66d       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago       Running             csi-resizer                              0                   8fb95cb262332       csi-hostpath-resizer-0
	dad22d6eb19cd       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   98a0f80edfe46       snapshot-controller-7d9fbc56b8-pgl85
	5a5967370a8e6       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             7 minutes ago       Running             csi-attacher                             0                   2dc442c25fff1       csi-hostpath-attacher-0
	b41eca9a1773d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   7 minutes ago       Running             csi-external-health-monitor-controller   0                   464368ae55533       csi-hostpathplugin-9dzzw
	5a862686cb4d2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   7 minutes ago       Exited              patch                                    0                   00a46d438634f       ingress-nginx-admission-patch-7kdd8
	34e63cfc2b183       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   ee70cbc29ea09       snapshot-controller-7d9fbc56b8-qk9k2
	52e9e9c4d34d4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   7 minutes ago       Exited              create                                   0                   a3f82e39fba77       ingress-nginx-admission-create-kj7f9
	69fe297c1b50a       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               7 minutes ago       Running             minikube-ingress-dns                     0                   57ab048400abb       kube-ingress-dns-minikube
	cfab18621429e       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     8 minutes ago       Running             amd-gpu-device-plugin                    0                   4d885cc41b56c       amd-gpu-device-plugin-2tv7p
	1bb6f2c716335       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             8 minutes ago       Running             storage-provisioner                      0                   346d7ee8b9728       storage-provisioner
	ecdc72298c506       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             8 minutes ago       Running             coredns                                  0                   f734f4ea6404b       coredns-66bc5c9577-xtt8z
	4d0daf4cf92a3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             8 minutes ago       Running             kube-proxy                               0                   28544be4ccc8d       kube-proxy-8hbf4
	1939a4061bbfb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             8 minutes ago       Running             kube-controller-manager                  0                   8461abff35ed3       kube-controller-manager-addons-640912
	7a5312ba3c9de       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             8 minutes ago       Running             kube-scheduler                           0                   8cb548decbe81       kube-scheduler-addons-640912
	b5f31d63b316b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             8 minutes ago       Running             kube-apiserver                           0                   82cda88284e70       kube-apiserver-addons-640912
	f516d00cd4256       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             8 minutes ago       Running             etcd                                     0                   9d5b3d3ae012e       etcd-addons-640912
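
The "container status" table above is the human-readable rendering of the same CRI ListContainers call that cri-o logs at debug level earlier in this section. A minimal Go sketch of that call against the local cri-o socket follows; the socket path and the insecure local dial are assumptions matching minikube's defaults, not something the test itself does:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the cri-o socket directly; no TLS on a local unix socket.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter is the "No filters were applied, returning full
	// container list" case visible in the crio debug log above.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
		ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id, c.State, c.Metadata.Name)
	}
}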
	
	
	==> coredns [ecdc72298c5067f27022c29a3fba2313bef279aa1262921b54b2abfeb1bcfd05] <==
	[INFO] 10.244.0.8:56749 - 57800 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000167846s
	[INFO] 10.244.0.8:56749 - 7634 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000234715s
	[INFO] 10.244.0.8:56749 - 64775 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000214438s
	[INFO] 10.244.0.8:56749 - 27735 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000222714s
	[INFO] 10.244.0.8:56749 - 4667 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000235027s
	[INFO] 10.244.0.8:56749 - 32956 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000240757s
	[INFO] 10.244.0.8:56749 - 59149 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.001351156s
	[INFO] 10.244.0.8:47223 - 42964 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000217411s
	[INFO] 10.244.0.8:47223 - 43270 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001085953s
	[INFO] 10.244.0.8:60054 - 39280 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000101116s
	[INFO] 10.244.0.8:60054 - 39607 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000284859s
	[INFO] 10.244.0.8:45885 - 39288 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001299s
	[INFO] 10.244.0.8:45885 - 39507 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000087106s
	[INFO] 10.244.0.8:33022 - 41004 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000143608s
	[INFO] 10.244.0.8:33022 - 41467 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000090986s
	[INFO] 10.244.0.23:41923 - 2129 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000901948s
	[INFO] 10.244.0.23:37925 - 19699 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000411247s
	[INFO] 10.244.0.23:56154 - 55757 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000225694s
	[INFO] 10.244.0.23:55144 - 14584 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000195303s
	[INFO] 10.244.0.23:43131 - 45070 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000319047s
	[INFO] 10.244.0.23:59696 - 23369 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.002225751s
	[INFO] 10.244.0.23:45065 - 55293 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001506203s
	[INFO] 10.244.0.23:47314 - 7537 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.005372558s
	[INFO] 10.244.0.28:41385 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.003092641s
	[INFO] 10.244.0.28:50820 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.001765974s
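
The NXDOMAIN/NOERROR pairs above are ordinary resolver search-list expansion: with the kubelet-provisioned resolv.conf (ndots:5 plus the namespace, svc, and cluster search domains), a name like registry.kube-system.svc.cluster.local has only four dots, so every search suffix is tried and fails before the absolute name finally resolves. A minimal sketch of that expansion, assuming the querying pod sits in kube-system:

package main

import (
	"fmt"
	"strings"
)

// candidates mimics glibc/musl search-list expansion: names with fewer
// than ndots dots try each search suffix before the name itself.
func candidates(name string, search []string, ndots int) []string {
	var out []string
	if strings.Count(name, ".") < ndots {
		for _, s := range search {
			out = append(out, name+"."+s+".")
		}
	}
	return append(out, name+".")
}

func main() {
	// Search path as kubelet writes it for a pod in kube-system (assumed).
	search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	for _, q := range candidates("registry.kube-system.svc.cluster.local", search, 5) {
		fmt.Println(q) // reproduces the query names in the coredns log above
	}
}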
	
	
	==> describe nodes <==
	Name:               addons-640912
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-640912
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=addons-640912
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T13_29_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-640912
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-640912"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:29:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-640912
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 13:38:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 13:32:50 +0000   Sun, 09 Nov 2025 13:29:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 13:32:50 +0000   Sun, 09 Nov 2025 13:29:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 13:32:50 +0000   Sun, 09 Nov 2025 13:29:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 13:32:50 +0000   Sun, 09 Nov 2025 13:29:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    addons-640912
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 50c2653cdfbf41f9bef0624b1a679070
	  System UUID:                50c2653c-dfbf-41f9-bef0-624b1a679070
	  Boot ID:                    92fab23c-5b35-498d-b1ae-dc16572c1ced
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-8j7xf    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         8m12s
	  kube-system                 amd-gpu-device-plugin-2tv7p                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 coredns-66bc5c9577-xtt8z                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m25s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 csi-hostpathplugin-9dzzw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 etcd-addons-640912                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m30s
	  kube-system                 kube-apiserver-addons-640912                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-controller-manager-addons-640912        200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-proxy-8hbf4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-scheduler-addons-640912                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 snapshot-controller-7d9fbc56b8-pgl85         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 snapshot-controller-7d9fbc56b8-qk9k2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m22s  kube-proxy       
	  Normal  Starting                 8m31s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m30s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m30s  kubelet          Node addons-640912 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m30s  kubelet          Node addons-640912 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m30s  kubelet          Node addons-640912 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m29s  kubelet          Node addons-640912 status is now: NodeReady
	  Normal  RegisteredNode           8m26s  node-controller  Node addons-640912 event: Registered Node addons-640912 in Controller
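
The percentages under "Allocated resources" follow directly from the pod table: 850m of CPU requests against the node's 2 allocatable CPUs, and 260Mi of memory requests against 4001788Ki allocatable, truncated to whole percents. A small sketch of that arithmetic with the apimachinery resource types; the truncating integer division is an assumption that happens to reproduce the rounding shown here:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Values copied from the node description above.
	cpuReq, cpuCap := resource.MustParse("850m"), resource.MustParse("2")
	memReq, memCap := resource.MustParse("260Mi"), resource.MustParse("4001788Ki")

	fmt.Printf("cpu    %d%%\n", cpuReq.MilliValue()*100/cpuCap.MilliValue()) // 42%
	fmt.Printf("memory %d%%\n", memReq.Value()*100/memCap.Value())          // 6%
}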
	
	
	==> dmesg <==
	[  +0.000084] kauditd_printk_skb: 207 callbacks suppressed
	[Nov 9 13:30] kauditd_printk_skb: 123 callbacks suppressed
	[  +2.597925] kauditd_printk_skb: 235 callbacks suppressed
	[  +0.573763] kauditd_printk_skb: 410 callbacks suppressed
	[  +9.105607] kauditd_printk_skb: 35 callbacks suppressed
	[  +9.999909] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.891357] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.415789] kauditd_printk_skb: 122 callbacks suppressed
	[  +4.010962] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.104992] kauditd_printk_skb: 59 callbacks suppressed
	[Nov 9 13:31] kauditd_printk_skb: 87 callbacks suppressed
	[  +2.729784] kauditd_printk_skb: 111 callbacks suppressed
	[  +0.054539] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.206294] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.614998] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.051708] kauditd_printk_skb: 47 callbacks suppressed
	[  +9.781817] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.000042] kauditd_printk_skb: 22 callbacks suppressed
	[  +3.743671] kauditd_printk_skb: 109 callbacks suppressed
	[  +3.183523] kauditd_printk_skb: 109 callbacks suppressed
	[Nov 9 13:32] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.000937] kauditd_printk_skb: 30 callbacks suppressed
	[ +12.098567] kauditd_printk_skb: 94 callbacks suppressed
	[  +6.595281] kauditd_printk_skb: 80 callbacks suppressed
	[Nov 9 13:33] kauditd_printk_skb: 15 callbacks suppressed
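	# Aside: "kauditd_printk_skb: N callbacks suppressed" means the kernel is
	# rate-limiting audit records during container churn; it is noise, not an
	# error. A sketch for inspecting or raising the limits, assuming auditctl is
	# available in the guest (e.g. via `minikube ssh -p addons-640912`):
	sudo auditctl -s             # show current rate_limit and backlog_limit
	sudo auditctl -r 0 -b 8192   # example: disable rate limiting, enlarge backlog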
	
	
	==> etcd [f516d00cd42560c02c8218477c1f740c22bdc3e04d80a67197f40634f93e478e] <==
	{"level":"warn","ts":"2025-11-09T13:31:15.197375Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.099856ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:31:15.197462Z","caller":"traceutil/trace.go:172","msg":"trace[356284258] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1132; }","duration":"132.198445ms","start":"2025-11-09T13:31:15.065252Z","end":"2025-11-09T13:31:15.197451Z","steps":["trace[356284258] 'agreement among raft nodes before linearized reading'  (duration: 128.166813ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:31:15.198082Z","caller":"traceutil/trace.go:172","msg":"trace[1784177865] transaction","detail":"{read_only:false; response_revision:1133; number_of_response:1; }","duration":"153.692795ms","start":"2025-11-09T13:31:15.044376Z","end":"2025-11-09T13:31:15.198069Z","steps":["trace[1784177865] 'process raft request'  (duration: 148.980587ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:31:22.996146Z","caller":"traceutil/trace.go:172","msg":"trace[302697611] linearizableReadLoop","detail":"{readStateIndex:1194; appliedIndex:1194; }","duration":"165.040081ms","start":"2025-11-09T13:31:22.831088Z","end":"2025-11-09T13:31:22.996128Z","steps":["trace[302697611] 'read index received'  (duration: 165.034114ms)","trace[302697611] 'applied index is now lower than readState.Index'  (duration: 5.157µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T13:31:22.996293Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.199351ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:31:22.996314Z","caller":"traceutil/trace.go:172","msg":"trace[1678018842] range","detail":"{range_begin:/registry/poddisruptionbudgets; range_end:; response_count:0; response_revision:1160; }","duration":"165.253621ms","start":"2025-11-09T13:31:22.831055Z","end":"2025-11-09T13:31:22.996309Z","steps":["trace[1678018842] 'agreement among raft nodes before linearized reading'  (duration: 165.171034ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T13:31:22.997662Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.922771ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:31:22.998744Z","caller":"traceutil/trace.go:172","msg":"trace[1858999846] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1161; }","duration":"103.012616ms","start":"2025-11-09T13:31:22.895717Z","end":"2025-11-09T13:31:22.998730Z","steps":["trace[1858999846] 'agreement among raft nodes before linearized reading'  (duration: 101.899265ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:31:22.999100Z","caller":"traceutil/trace.go:172","msg":"trace[1451434482] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"249.478691ms","start":"2025-11-09T13:31:22.749609Z","end":"2025-11-09T13:31:22.999088Z","steps":["trace[1451434482] 'process raft request'  (duration: 247.857862ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:31:52.754938Z","caller":"traceutil/trace.go:172","msg":"trace[6026568] linearizableReadLoop","detail":"{readStateIndex:1397; appliedIndex:1397; }","duration":"236.117273ms","start":"2025-11-09T13:31:52.518730Z","end":"2025-11-09T13:31:52.754847Z","steps":["trace[6026568] 'read index received'  (duration: 236.112503ms)","trace[6026568] 'applied index is now lower than readState.Index'  (duration: 4.061µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T13:31:52.755188Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"236.415585ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2025-11-09T13:31:52.755257Z","caller":"traceutil/trace.go:172","msg":"trace[6914757] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1354; }","duration":"236.519277ms","start":"2025-11-09T13:31:52.518725Z","end":"2025-11-09T13:31:52.755244Z","steps":["trace[6914757] 'agreement among raft nodes before linearized reading'  (duration: 236.32921ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T13:31:52.755661Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.569325ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" limit:1 ","response":"range_response_count:1 size:3395"}
	{"level":"info","ts":"2025-11-09T13:31:52.755687Z","caller":"traceutil/trace.go:172","msg":"trace[1620442481] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:1; response_revision:1355; }","duration":"185.600716ms","start":"2025-11-09T13:31:52.570080Z","end":"2025-11-09T13:31:52.755681Z","steps":["trace[1620442481] 'agreement among raft nodes before linearized reading'  (duration: 185.518604ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:31:52.755923Z","caller":"traceutil/trace.go:172","msg":"trace[1200344183] transaction","detail":"{read_only:false; response_revision:1355; number_of_response:1; }","duration":"304.583393ms","start":"2025-11-09T13:31:52.451331Z","end":"2025-11-09T13:31:52.755915Z","steps":["trace[1200344183] 'process raft request'  (duration: 304.178309ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T13:31:52.756031Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-09T13:31:52.451310Z","time spent":"304.631939ms","remote":"127.0.0.1:58684","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1343 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-11-09T13:31:55.033981Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.520258ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:31:55.034081Z","caller":"traceutil/trace.go:172","msg":"trace[553597333] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1365; }","duration":"136.623206ms","start":"2025-11-09T13:31:54.897438Z","end":"2025-11-09T13:31:55.034062Z","steps":["trace[553597333] 'range keys from in-memory index tree'  (duration: 136.438838ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:32:01.051010Z","caller":"traceutil/trace.go:172","msg":"trace[427081115] linearizableReadLoop","detail":"{readStateIndex:1451; appliedIndex:1451; }","duration":"321.984641ms","start":"2025-11-09T13:32:00.728995Z","end":"2025-11-09T13:32:01.050980Z","steps":["trace[427081115] 'read index received'  (duration: 321.978499ms)","trace[427081115] 'applied index is now lower than readState.Index'  (duration: 5.245µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T13:32:01.051205Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"322.326861ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:32:01.051230Z","caller":"traceutil/trace.go:172","msg":"trace[33595075] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1404; }","duration":"322.375104ms","start":"2025-11-09T13:32:00.728848Z","end":"2025-11-09T13:32:01.051224Z","steps":["trace[33595075] 'agreement among raft nodes before linearized reading'  (duration: 322.303091ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T13:32:01.052190Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.405283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-11-09T13:32:01.052402Z","caller":"traceutil/trace.go:172","msg":"trace[1969419880] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1405; }","duration":"217.720748ms","start":"2025-11-09T13:32:00.834666Z","end":"2025-11-09T13:32:01.052387Z","steps":["trace[1969419880] 'agreement among raft nodes before linearized reading'  (duration: 217.090716ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:32:01.052589Z","caller":"traceutil/trace.go:172","msg":"trace[1054216716] transaction","detail":"{read_only:false; response_revision:1405; number_of_response:1; }","duration":"365.515044ms","start":"2025-11-09T13:32:00.687065Z","end":"2025-11-09T13:32:01.052580Z","steps":["trace[1054216716] 'process raft request'  (duration: 364.182623ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T13:32:01.052693Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-09T13:32:00.687045Z","time spent":"365.59912ms","remote":"127.0.0.1:58726","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3708,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/cloud-spanner-emulator-6f9fcf858b-v5gk4\" mod_revision:1404 > success:<request_put:<key:\"/registry/pods/default/cloud-spanner-emulator-6f9fcf858b-v5gk4\" value_size:3638 >> failure:<request_range:<key:\"/registry/pods/default/cloud-spanner-emulator-6f9fcf858b-v5gk4\" > >"}
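	# Aside: the recurring "apply request took too long" warnings typically point
	# at slow disk I/O in the VM rather than an etcd problem. One way to sample
	# etcd's fsync/commit latency from inside the node; port 2381 is the kubeadm
	# default metrics listener and is an assumption here:
	curl -s http://127.0.0.1:2381/metrics | \
	  grep -E 'etcd_disk_(wal_fsync|backend_commit)_duration_seconds_(sum|count)'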
	
	
	==> kernel <==
	 13:38:16 up 9 min,  0 users,  load average: 0.30, 0.98, 0.79
	Linux addons-640912 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Nov  1 20:49:51 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [b5f31d63b316b6b4fceaf2cd37baa19b18c8dd0332189d49ddc503733e9f8479] <==
	E1109 13:30:47.751126       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.99.89:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.99.89:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.99.89:443: connect: connection refused" logger="UnhandledError"
	E1109 13:30:47.752177       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.99.89:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.99.89:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.99.89:443: connect: connection refused" logger="UnhandledError"
	E1109 13:30:47.759123       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.99.89:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.99.89:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.99.89:443: connect: connection refused" logger="UnhandledError"
	W1109 13:30:48.744109       1 handler_proxy.go:99] no RequestInfo found in the context
	W1109 13:30:48.744125       1 handler_proxy.go:99] no RequestInfo found in the context
	E1109 13:30:48.744161       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1109 13:30:48.744175       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1109 13:30:48.744183       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1109 13:30:48.745368       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1109 13:30:52.828573       1 handler_proxy.go:99] no RequestInfo found in the context
	E1109 13:30:52.828619       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1109 13:30:52.832035       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.99.89:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.99.89:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1109 13:30:52.900702       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1109 13:30:52.919741       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1109 13:31:36.567406       1 conn.go:339] Error on socket receive: read tcp 192.168.39.228:8443->192.168.39.1:33552: use of closed network connection
	I1109 13:31:47.188509       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.149.219"}
	I1109 13:32:15.730439       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1109 13:32:15.992589       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.68.202"}
	I1109 13:32:53.848728       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
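	# Aside: the v1beta1.metrics.k8s.io errors are the aggregation layer probing
	# metrics-server before it was ready; the last line shows the item leaving
	# the retry queue at 13:32:53. A sketch to confirm the APIService settled
	# (the k8s-app=metrics-server label is assumed from the upstream manifests):
	kubectl --context addons-640912 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-640912 -n kube-system get pods -l k8s-app=metrics-server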
	
	
	==> kube-controller-manager [1939a4061bbfb63f842de0036168e14c4e2c4e170bc480e3de75bc49ea789293] <==
	I1109 13:29:49.612590       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-640912"
	I1109 13:29:49.612991       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1109 13:29:49.612713       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 13:29:49.613290       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 13:29:49.613297       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 13:29:49.613360       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1109 13:29:49.612798       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 13:29:49.616256       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1109 13:29:49.616606       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1109 13:29:49.618294       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 13:29:49.620056       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 13:29:49.620089       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	E1109 13:30:19.541345       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1109 13:30:19.542179       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1109 13:30:19.542277       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1109 13:30:19.638797       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1109 13:30:19.644003       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 13:30:19.657628       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1109 13:30:19.758998       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1109 13:30:49.652808       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1109 13:30:49.771791       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1109 13:31:50.564758       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1109 13:32:14.119991       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I1109 13:32:16.526063       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I1109 13:32:49.285352       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	
	
	==> kube-proxy [4d0daf4cf92a3d52b6f1d92f9a1bfeeac5d7a7f31da83e4fb7df39268b3e4ea5] <==
	I1109 13:29:52.980421       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 13:29:53.082837       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 13:29:53.086021       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.228"]
	E1109 13:29:53.086130       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 13:29:53.751653       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1109 13:29:53.751799       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1109 13:29:53.751834       1 server_linux.go:132] "Using iptables Proxier"
	I1109 13:29:53.834205       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 13:29:53.836618       1 server.go:527] "Version info" version="v1.34.1"
	I1109 13:29:53.836664       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:29:53.846430       1 config.go:200] "Starting service config controller"
	I1109 13:29:53.846481       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 13:29:53.846506       1 config.go:106] "Starting endpoint slice config controller"
	I1109 13:29:53.846510       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 13:29:53.846520       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 13:29:53.846523       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 13:29:53.874452       1 config.go:309] "Starting node config controller"
	I1109 13:29:53.874500       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 13:29:53.874508       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 13:29:53.947795       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 13:29:53.947900       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 13:29:53.947945       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
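	# Aside: the IPv6 message above only means this guest kernel ships no
	# ip6tables nat table, so kube-proxy runs single-stack IPv4; harmless for
	# these tests. A sketch to confirm from inside the node:
	sudo ip6tables -t nat -L     # expect "Table does not exist" on this image
	lsmod | grep ip6table_nat    # module absent in the Buildroot guest kernel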
	
	
	==> kube-scheduler [7a5312ba3c9de673540c31b087b548f07f9144cbe046b292ca5260fa3a2418c7] <==
	E1109 13:29:41.646013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 13:29:41.646142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 13:29:41.646741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 13:29:41.647690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 13:29:41.647977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 13:29:41.648034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 13:29:42.450718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 13:29:42.531090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 13:29:42.551808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 13:29:42.573030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 13:29:42.613834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1109 13:29:42.617089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 13:29:42.636745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 13:29:42.745262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 13:29:42.747084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 13:29:42.809366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 13:29:42.869592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 13:29:42.934044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 13:29:42.941621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 13:29:42.985001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 13:29:43.033735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 13:29:43.088695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 13:29:43.123724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 13:29:43.146070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1109 13:29:44.634226       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
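	# Aside: the "forbidden" list/watch errors are the usual startup race, the
	# scheduler polling before RBAC bootstrap finished; they stop once caches
	# sync (last line). The default grant can be verified with:
	kubectl --context addons-640912 get clusterrolebinding system:kube-scheduler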
	
	
	==> kubelet <==
	Nov 09 13:37:25 addons-640912 kubelet[1496]: E1109 13:37:25.660496    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762695445659765789  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:37:29 addons-640912 kubelet[1496]: E1109 13:37:29.575121    1496 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Nov 09 13:37:29 addons-640912 kubelet[1496]: E1109 13:37:29.575197    1496 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Nov 09 13:37:29 addons-640912 kubelet[1496]: E1109 13:37:29.575956    1496 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(d25ea36b-0cab-4e93-a461-46fc1de68cdc): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 09 13:37:29 addons-640912 kubelet[1496]: E1109 13:37:29.576027    1496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="d25ea36b-0cab-4e93-a461-46fc1de68cdc"
	Nov 09 13:37:35 addons-640912 kubelet[1496]: E1109 13:37:35.665829    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762695455665169289  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:37:35 addons-640912 kubelet[1496]: E1109 13:37:35.666004    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762695455665169289  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:37:42 addons-640912 kubelet[1496]: E1109 13:37:42.085645    1496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="d25ea36b-0cab-4e93-a461-46fc1de68cdc"
	Nov 09 13:37:45 addons-640912 kubelet[1496]: E1109 13:37:45.669143    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762695465668472910  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:37:45 addons-640912 kubelet[1496]: E1109 13:37:45.669174    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762695465668472910  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:37:55 addons-640912 kubelet[1496]: E1109 13:37:55.673016    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762695475672313378  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:37:55 addons-640912 kubelet[1496]: E1109 13:37:55.673108    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762695475672313378  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:37:56 addons-640912 kubelet[1496]: E1109 13:37:56.084646    1496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="d25ea36b-0cab-4e93-a461-46fc1de68cdc"
	Nov 09 13:37:59 addons-640912 kubelet[1496]: I1109 13:37:59.079763    1496 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-2tv7p" secret="" err="secret \"gcp-auth\" not found"
	Nov 09 13:37:59 addons-640912 kubelet[1496]: E1109 13:37:59.693124    1496 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Nov 09 13:37:59 addons-640912 kubelet[1496]: E1109 13:37:59.693209    1496 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Nov 09 13:37:59 addons-640912 kubelet[1496]: E1109 13:37:59.693413    1496 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod test-local-path_default(24715673-6be0-4489-8fb3-064bda4b15c9): ErrImagePull: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 09 13:37:59 addons-640912 kubelet[1496]: E1109 13:37:59.693608    1496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="24715673-6be0-4489-8fb3-064bda4b15c9"
	Nov 09 13:38:05 addons-640912 kubelet[1496]: E1109 13:38:05.676835    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762695485676221277  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:38:05 addons-640912 kubelet[1496]: E1109 13:38:05.676953    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762695485676221277  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:38:07 addons-640912 kubelet[1496]: E1109 13:38:07.082646    1496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="d25ea36b-0cab-4e93-a461-46fc1de68cdc"
	Nov 09 13:38:13 addons-640912 kubelet[1496]: E1109 13:38:13.087092    1496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="24715673-6be0-4489-8fb3-064bda4b15c9"
	Nov 09 13:38:14 addons-640912 kubelet[1496]: I1109 13:38:14.080294    1496 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 09 13:38:15 addons-640912 kubelet[1496]: E1109 13:38:15.682029    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762695495679791841  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:38:15 addons-640912 kubelet[1496]: E1109 13:38:15.682092    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762695495679791841  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
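	# Aside: two distinct messages repeat above. The eviction-manager "missing
	# image stats" lines are kubelet/cri-o stats plumbing noise; the
	# "toomanyrequests" pull failures are what actually breaks the tests. One
	# hedged workaround is side-loading images so no Docker Hub pull happens:
	docker pull docker.io/nginx:alpine
	minikube -p addons-640912 image load docker.io/nginx:alpine
	# Or authenticate pulls with a registry secret (credentials are placeholders):
	kubectl --context addons-640912 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>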
	
	
	==> storage-provisioner [1bb6f2c7163352aca0e271bcb0b20322b3ac5af9c07044f0d2523de5bd207ba1] <==
	W1109 13:37:51.619490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:37:53.624450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:37:53.633141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:37:55.637776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:37:55.644801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:37:57.649599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:37:57.660004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:37:59.664269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:37:59.680740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:01.685100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:01.692675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:03.698217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:03.705447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:05.710617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:05.720233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:07.724375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:07.734393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:09.739151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:09.745758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:11.750836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:11.759463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:13.764629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:13.772608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:15.778047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:15.788620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
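	# Aside: these warnings come from the provisioner's leader election, which
	# still writes the core/v1 Endpoints object seen in the etcd trace above
	# (k8s.io-minikube-hostpath); deprecated but functional. To inspect both:
	kubectl --context addons-640912 -n kube-system get endpoints k8s.io-minikube-hostpath
	kubectl --context addons-640912 -n kube-system get endpointslices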
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-640912 -n addons-640912
helpers_test.go:269: (dbg) Run:  kubectl --context addons-640912 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-kj7f9 ingress-nginx-admission-patch-7kdd8
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-640912 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-kj7f9 ingress-nginx-admission-patch-7kdd8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-640912 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-kj7f9 ingress-nginx-admission-patch-7kdd8: exit status 1 (110.354516ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-640912/192.168.39.228
	Start Time:       Sun, 09 Nov 2025 13:32:15 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nxkzm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nxkzm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m2s                  default-scheduler  Successfully assigned default/nginx to addons-640912
	  Normal   Pulling    2m21s (x3 over 6m1s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     48s (x3 over 4m49s)   kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     48s (x3 over 4m49s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    10s (x5 over 4m48s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     10s (x5 over 4m48s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-640912/192.168.39.228
	Start Time:       Sun, 09 Nov 2025 13:32:14 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bmmc7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-bmmc7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m3s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-640912
	  Warning  Failed     108s (x3 over 5m19s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     108s (x3 over 5m19s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    68s (x5 over 5m18s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     68s (x5 over 5m18s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    56s (x4 over 6m3s)    kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-640912/192.168.39.228
	Start Time:       Sun, 09 Nov 2025 13:31:52 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sgzjt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-sgzjt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m25s                default-scheduler  Successfully assigned default/test-local-path to addons-640912
	  Warning  Failed     5m51s                kubelet            Failed to pull image "busybox:stable": fetching target platform image selected from image index: reading manifest sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    92s (x4 over 6m21s)  kubelet            Pulling image "busybox:stable"
	  Warning  Failed     18s (x4 over 5m51s)  kubelet            Error: ErrImagePull
	  Warning  Failed     18s (x3 over 4m19s)  kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4s (x6 over 5m51s)   kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     4s (x6 over 5m51s)   kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-kj7f9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7kdd8" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-640912 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-kj7f9 ingress-nginx-admission-patch-7kdd8: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-640912 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-640912 addons disable volumesnapshots --alsologtostderr -v=1: (1.024369873s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-640912 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-640912 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.434271894s)
--- FAIL: TestAddons/parallel/CSI (373.86s)
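Every failing pod in this run is blocked on Docker Hub's unauthenticated pull limit (toomanyrequests) rather than on CSI itself. A quick triage sketch, reusing commands the harness already runs above:

	kubectl --context addons-640912 get pods -A --field-selector=status.phase!=Running
	kubectl --context addons-640912 describe pod nginx | grep -A2 'Failed to pull'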

TestAddons/parallel/LocalPath (232.58s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-640912 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-640912 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640912 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640912 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640912 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640912 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640912 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640912 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-640912 get pvc test-pvc -o jsonpath={.status.phase} -n default
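The seven identical invocations above poll the claim's phase by hand. On kubectl v1.23 or newer the same wait can be expressed in a single declarative call; a minimal sketch, assuming Bound is the phase being waited for:

    kubectl --context addons-640912 -n default wait pvc/test-pvc \
      --for=jsonpath='{.status.phase}'=Bound --timeout=5m0s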
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [24715673-6be0-4489-8fb3-064bda4b15c9] Pending
helpers_test.go:352: "test-local-path" [24715673-6be0-4489-8fb3-064bda4b15c9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:337: TestAddons/parallel/LocalPath: WARNING: pod list for "default" "run=test-local-path" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:962: ***** TestAddons/parallel/LocalPath: pod "run=test-local-path" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:962: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-640912 -n addons-640912
addons_test.go:962: TestAddons/parallel/LocalPath: showing logs for failed pods as of 2025-11-09 13:34:52.458959477 +0000 UTC m=+363.181433863
addons_test.go:962: (dbg) Run:  kubectl --context addons-640912 describe po test-local-path -n default
addons_test.go:962: (dbg) kubectl --context addons-640912 describe po test-local-path -n default:
Name:             test-local-path
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-640912/192.168.39.228
Start Time:       Sun, 09 Nov 2025 13:31:52 +0000
Labels:           run=test-local-path
Annotations:      <none>
Status:           Pending
IP:               10.244.0.27
IPs:
IP:  10.244.0.27
Containers:
busybox:
Container ID:  
Image:         busybox:stable
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
sh
-c
echo 'local-path-provisioner' > /test/file1
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/test from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sgzjt (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
data:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  test-pvc
ReadOnly:   false
kube-api-access-sgzjt:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  3m                   default-scheduler  Successfully assigned default/test-local-path to addons-640912
Warning  Failed     2m26s                kubelet            Failed to pull image "busybox:stable": fetching target platform image selected from image index: reading manifest sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     54s (x2 over 2m26s)  kubelet            Error: ErrImagePull
Warning  Failed     54s                  kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    42s (x2 over 2m26s)  kubelet            Back-off pulling image "busybox:stable"
Warning  Failed     42s (x2 over 2m26s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    30s (x3 over 2m56s)  kubelet            Pulling image "busybox:stable"
addons_test.go:962: (dbg) Run:  kubectl --context addons-640912 logs test-local-path -n default
addons_test.go:962: (dbg) Non-zero exit: kubectl --context addons-640912 logs test-local-path -n default: exit status 1 (86.661269ms)

** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "test-local-path" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:962: kubectl --context addons-640912 logs test-local-path -n default: exit status 1
addons_test.go:963: failed waiting for test-local-path pod: run=test-local-path within 3m0s: context deadline exceeded
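The pod is scheduled and its volume is bound; it is stuck solely on ImagePullBackOff because busybox:stable cannot be pulled anonymously. A side-loading workaround, assuming the image is available to the host's Docker daemon or as a tarball, pushes it directly into the node's container storage:

    out/minikube-linux-amd64 -p addons-640912 image load busybox:stable

Since the pod references a non-latest tag, its default imagePullPolicy is IfNotPresent, so the kubelet's next backoff retry would find the side-loaded image and start the container.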
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-640912 -n addons-640912
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-640912 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-640912 logs -n 25: (1.696322556s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │ 09 Nov 25 13:28 UTC │
	│ delete  │ -p download-only-969818                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-969818 │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │ 09 Nov 25 13:28 UTC │
	│ start   │ -o=json --download-only -p download-only-045678 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-045678 │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ delete  │ -p download-only-045678                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-045678 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ delete  │ -p download-only-969818                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-969818 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ delete  │ -p download-only-045678                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-045678 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ start   │ --download-only -p binary-mirror-045777 --alsologtostderr --binary-mirror http://127.0.0.1:41935 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-045777 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ delete  │ -p binary-mirror-045777                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-045777 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ addons  │ enable dashboard -p addons-640912                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-640912                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ start   │ -p addons-640912 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ addons-640912 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ addons-640912 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ enable headlamp -p addons-640912 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ addons-640912 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ addons-640912 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:31 UTC │ 09 Nov 25 13:31 UTC │
	│ addons  │ addons-640912 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-640912 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-640912 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ ip      │ addons-640912 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-640912 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-640912 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-640912                                                                                                                                                                                                                                                                                                                                                                                         │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-640912 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-640912        │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:29:01
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:29:01.529521  554049 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:29:01.529783  554049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:29:01.529806  554049 out.go:374] Setting ErrFile to fd 2...
	I1109 13:29:01.529811  554049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:29:01.530042  554049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 13:29:01.530619  554049 out.go:368] Setting JSON to false
	I1109 13:29:01.531597  554049 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":69091,"bootTime":1762625851,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 13:29:01.531713  554049 start.go:143] virtualization: kvm guest
	I1109 13:29:01.533875  554049 out.go:179] * [addons-640912] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 13:29:01.535675  554049 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:29:01.535668  554049 notify.go:221] Checking for updates...
	I1109 13:29:01.538124  554049 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:29:01.539382  554049 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 13:29:01.540720  554049 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 13:29:01.542038  554049 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 13:29:01.543437  554049 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:29:01.545291  554049 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:29:01.580555  554049 out.go:179] * Using the kvm2 driver based on user configuration
	I1109 13:29:01.581955  554049 start.go:309] selected driver: kvm2
	I1109 13:29:01.581991  554049 start.go:930] validating driver "kvm2" against <nil>
	I1109 13:29:01.582008  554049 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:29:01.582854  554049 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 13:29:01.583161  554049 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:29:01.583199  554049 cni.go:84] Creating CNI manager for ""
	I1109 13:29:01.583249  554049 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 13:29:01.583262  554049 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1109 13:29:01.583305  554049 start.go:353] cluster config:
	{Name:addons-640912 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-640912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:29:01.583400  554049 iso.go:125] acquiring lock: {Name:mk8f0547b4600c0b2d1e831f024251df19b55199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:29:01.585006  554049 out.go:179] * Starting "addons-640912" primary control-plane node in "addons-640912" cluster
	I1109 13:29:01.586291  554049 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:29:01.586344  554049 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 13:29:01.586355  554049 cache.go:65] Caching tarball of preloaded images
	I1109 13:29:01.586504  554049 preload.go:238] Found /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 13:29:01.586520  554049 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 13:29:01.586929  554049 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/config.json ...
	I1109 13:29:01.586963  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/config.json: {Name:mk64beb99f02d72e356fa001c0aedbf8dde60a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:01.587175  554049 start.go:360] acquireMachinesLock for addons-640912: {Name:mkb2a0b4add9a99b18ce9ab72b74eb5b4fda0e0a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1109 13:29:01.587250  554049 start.go:364] duration metric: took 54.118µs to acquireMachinesLock for "addons-640912"
	I1109 13:29:01.587279  554049 start.go:93] Provisioning new machine with config: &{Name:addons-640912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-640912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:29:01.587339  554049 start.go:125] createHost starting for "" (driver="kvm2")
	I1109 13:29:01.588964  554049 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1109 13:29:01.589196  554049 start.go:159] libmachine.API.Create for "addons-640912" (driver="kvm2")
	I1109 13:29:01.589238  554049 client.go:173] LocalClient.Create starting
	I1109 13:29:01.589385  554049 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem
	I1109 13:29:01.866031  554049 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem
	I1109 13:29:02.376066  554049 main.go:143] libmachine: creating domain...
	I1109 13:29:02.376091  554049 main.go:143] libmachine: creating network...
	I1109 13:29:02.377887  554049 main.go:143] libmachine: found existing default network
	I1109 13:29:02.378145  554049 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1109 13:29:02.378765  554049 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f10d50}
	I1109 13:29:02.378922  554049 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-640912</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1109 13:29:02.385552  554049 main.go:143] libmachine: creating private network mk-addons-640912 192.168.39.0/24...
	I1109 13:29:02.480263  554049 main.go:143] libmachine: private network mk-addons-640912 192.168.39.0/24 created
	I1109 13:29:02.480592  554049 main.go:143] libmachine: <network>
	  <name>mk-addons-640912</name>
	  <uuid>5093d52e-d83e-4496-8f74-950632b55811</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:c3:49:16'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
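	# The mk-addons-640912 network above is ordinary libvirt network XML; libvirt filled in
	# the uuid, bridge, and mac at creation time. A hand-run equivalent (a sketch, assuming
	# the definition is saved as mk-addons-640912.xml):
	virsh net-define mk-addons-640912.xml
	virsh net-start mk-addons-640912
	virsh net-dumpxml mk-addons-640912   # prints the generated uuid/bridge/mac, as logged above
	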
	I1109 13:29:02.480645  554049 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912 ...
	I1109 13:29:02.480684  554049 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21139-549598/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso
	I1109 13:29:02.480700  554049 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 13:29:02.480786  554049 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21139-549598/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21139-549598/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso...
	I1109 13:29:02.790875  554049 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa...
	I1109 13:29:03.048683  554049 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/addons-640912.rawdisk...
	I1109 13:29:03.048760  554049 main.go:143] libmachine: Writing magic tar header
	I1109 13:29:03.048789  554049 main.go:143] libmachine: Writing SSH key tar header
	I1109 13:29:03.048939  554049 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912 ...
	I1109 13:29:03.049043  554049 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912
	I1109 13:29:03.049096  554049 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912 (perms=drwx------)
	I1109 13:29:03.049130  554049 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598/.minikube/machines
	I1109 13:29:03.049146  554049 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598/.minikube/machines (perms=drwxr-xr-x)
	I1109 13:29:03.049170  554049 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 13:29:03.049190  554049 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598/.minikube (perms=drwxr-xr-x)
	I1109 13:29:03.049210  554049 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598
	I1109 13:29:03.049226  554049 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598 (perms=drwxrwxr-x)
	I1109 13:29:03.049244  554049 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1109 13:29:03.049267  554049 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1109 13:29:03.049283  554049 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1109 13:29:03.049298  554049 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1109 13:29:03.049317  554049 main.go:143] libmachine: checking permissions on dir: /home
	I1109 13:29:03.049332  554049 main.go:143] libmachine: skipping /home - not owner
	I1109 13:29:03.049347  554049 main.go:143] libmachine: defining domain...
	I1109 13:29:03.051070  554049 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-640912</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/addons-640912.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-640912'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
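	# The domain definition above is likewise plain libvirt XML. Defining and starting it by
	# hand would look like this (a sketch, assuming the XML is saved as addons-640912.xml):
	virsh define addons-640912.xml
	virsh start addons-640912
	virsh dumpxml addons-640912   # prints the expanded XML, matching the dump logged below
	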
	I1109 13:29:03.059933  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:f8:20:b0 in network default
	I1109 13:29:03.060875  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:03.060908  554049 main.go:143] libmachine: starting domain...
	I1109 13:29:03.060914  554049 main.go:143] libmachine: ensuring networks are active...
	I1109 13:29:03.062198  554049 main.go:143] libmachine: Ensuring network default is active
	I1109 13:29:03.062950  554049 main.go:143] libmachine: Ensuring network mk-addons-640912 is active
	I1109 13:29:03.064049  554049 main.go:143] libmachine: getting domain XML...
	I1109 13:29:03.066087  554049 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-640912</name>
	  <uuid>50c2653c-dfbf-41f9-bef0-624b1a679070</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/addons-640912.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:3b:97:c4'/>
	      <source network='mk-addons-640912'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:f8:20:b0'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1109 13:29:04.518458  554049 main.go:143] libmachine: waiting for domain to start...
	I1109 13:29:04.520285  554049 main.go:143] libmachine: domain is now running
	I1109 13:29:04.520317  554049 main.go:143] libmachine: waiting for IP...
	I1109 13:29:04.521463  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:04.522572  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:04.522598  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:04.523028  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:04.523130  554049 retry.go:31] will retry after 248.555943ms: waiting for domain to come up
	I1109 13:29:04.773776  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:04.774727  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:04.774751  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:04.775169  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:04.775219  554049 retry.go:31] will retry after 253.374239ms: waiting for domain to come up
	I1109 13:29:05.030329  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:05.031648  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:05.031676  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:05.032301  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:05.032357  554049 retry.go:31] will retry after 460.991203ms: waiting for domain to come up
	I1109 13:29:05.495209  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:05.495935  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:05.495953  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:05.496394  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:05.496445  554049 retry.go:31] will retry after 488.671936ms: waiting for domain to come up
	I1109 13:29:05.987310  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:05.988315  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:05.988337  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:05.988678  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:05.988724  554049 retry.go:31] will retry after 734.270823ms: waiting for domain to come up
	I1109 13:29:06.724517  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:06.725451  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:06.725483  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:06.726091  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:06.726145  554049 retry.go:31] will retry after 813.958486ms: waiting for domain to come up
	I1109 13:29:07.541351  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:07.542188  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:07.542215  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:07.542584  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:07.542638  554049 retry.go:31] will retry after 773.028537ms: waiting for domain to come up
	I1109 13:29:08.317882  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:08.318758  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:08.318779  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:08.319182  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:08.319228  554049 retry.go:31] will retry after 902.625899ms: waiting for domain to come up
	I1109 13:29:09.223517  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:09.224270  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:09.224291  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:09.224645  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:09.224698  554049 retry.go:31] will retry after 1.447427193s: waiting for domain to come up
	I1109 13:29:10.674526  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:10.675369  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:10.675411  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:10.675832  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:10.675890  554049 retry.go:31] will retry after 1.413133453s: waiting for domain to come up
	I1109 13:29:12.090825  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:12.091679  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:12.091701  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:12.092074  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:12.092132  554049 retry.go:31] will retry after 1.812634142s: waiting for domain to come up
	I1109 13:29:13.907484  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:13.908470  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:13.908492  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:13.908953  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:13.909039  554049 retry.go:31] will retry after 3.291540475s: waiting for domain to come up
	I1109 13:29:17.202151  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:17.202984  554049 main.go:143] libmachine: no network interface addresses found for domain addons-640912 (source=lease)
	I1109 13:29:17.203006  554049 main.go:143] libmachine: trying to list again with source=arp
	I1109 13:29:17.203397  554049 main.go:143] libmachine: unable to find current IP address of domain addons-640912 in network mk-addons-640912 (interfaces detected: [])
	I1109 13:29:17.203453  554049 retry.go:31] will retry after 4.480228837s: waiting for domain to come up
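	# Each retry above queries libvirt twice: the DHCP lease table first (source=lease), then
	# the host ARP cache as a fallback (source=arp). One manual probe looks like:
	virsh domifaddr addons-640912 --source lease
	virsh domifaddr addons-640912 --source arp
	# Both stay empty until the guest's DHCP client acquires a lease, hence the backoff loop.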
	I1109 13:29:21.685736  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:21.686518  554049 main.go:143] libmachine: domain addons-640912 has current primary IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:21.686537  554049 main.go:143] libmachine: found domain IP: 192.168.39.228
	I1109 13:29:21.686546  554049 main.go:143] libmachine: reserving static IP address...
	I1109 13:29:21.687020  554049 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-640912", mac: "52:54:00:3b:97:c4", ip: "192.168.39.228"} in network mk-addons-640912
	I1109 13:29:21.917975  554049 main.go:143] libmachine: reserved static IP address 192.168.39.228 for domain addons-640912
	I1109 13:29:21.918007  554049 main.go:143] libmachine: waiting for SSH...
	I1109 13:29:21.918016  554049 main.go:143] libmachine: Getting to WaitForSSH function...
	I1109 13:29:21.923685  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:21.924701  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:21.924754  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:21.925088  554049 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:21.925387  554049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1109 13:29:21.925408  554049 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1109 13:29:22.046606  554049 main.go:143] libmachine: SSH cmd err, output: <nil>: 
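	# The SSH readiness probe is simply `exit 0` run until it succeeds. A manual equivalent,
	# using the machine key created earlier and the guest's docker user (an assumption based
	# on the client config logged above):
	ssh -i /home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa \
	  docker@192.168.39.228 'exit 0' && echo ssh-ready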
	I1109 13:29:22.047386  554049 main.go:143] libmachine: domain creation complete
	I1109 13:29:22.050123  554049 machine.go:94] provisionDockerMachine start ...
	I1109 13:29:22.054988  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.055715  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.055765  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.056311  554049 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:22.056903  554049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1109 13:29:22.056974  554049 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 13:29:22.180617  554049 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1109 13:29:22.180660  554049 buildroot.go:166] provisioning hostname "addons-640912"
	I1109 13:29:22.186275  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.187117  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.187172  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.187501  554049 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:22.187787  554049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1109 13:29:22.187941  554049 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-640912 && echo "addons-640912" | sudo tee /etc/hostname
	I1109 13:29:22.341110  554049 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-640912
	
	I1109 13:29:22.345909  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.346909  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.346961  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.347366  554049 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:22.347633  554049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1109 13:29:22.347656  554049 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-640912' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-640912/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-640912' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 13:29:22.484440  554049 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 13:29:22.484470  554049 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21139-549598/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-549598/.minikube}
	I1109 13:29:22.484528  554049 buildroot.go:174] setting up certificates
	I1109 13:29:22.484547  554049 provision.go:84] configureAuth start
	I1109 13:29:22.488028  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.488482  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.488510  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.491209  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.491676  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.491713  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.491896  554049 provision.go:143] copyHostCerts
	I1109 13:29:22.492005  554049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/key.pem (1679 bytes)
	I1109 13:29:22.492184  554049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/ca.pem (1082 bytes)
	I1109 13:29:22.492340  554049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/cert.pem (1123 bytes)
	I1109 13:29:22.492422  554049 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem org=jenkins.addons-640912 san=[127.0.0.1 192.168.39.228 addons-640912 localhost minikube]
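The server certificate above is issued for the SAN set [127.0.0.1 192.168.39.228 addons-640912 localhost minikube]. A quick sketch for confirming those SANs on the generated server.pem (assumes OpenSSL 1.1.1+ for the -ext flag; path taken from the log line above):

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem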
	I1109 13:29:22.673233  554049 provision.go:177] copyRemoteCerts
	I1109 13:29:22.673315  554049 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 13:29:22.676789  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.677351  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.677382  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.677656  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:22.784762  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 13:29:22.825830  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 13:29:22.864504  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 13:29:22.902700  554049 provision.go:87] duration metric: took 418.129808ms to configureAuth
	I1109 13:29:22.902746  554049 buildroot.go:189] setting minikube options for container-runtime
	I1109 13:29:22.903033  554049 config.go:182] Loaded profile config "addons-640912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:29:22.907271  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.907853  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:22.907882  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:22.908152  554049 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:22.908394  554049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1109 13:29:22.908415  554049 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 13:29:23.187121  554049 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
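The command above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O. Presumably the guest's crio.service sources that file as an environment file and expands the variable on its ExecStart line; the unit layout is an assumption, not shown in this log, and could be checked on the guest with:

	systemctl cat crio | grep -i -A1 EnvironmentFile
	# expected to reference /etc/sysconfig/crio.minikube, so that
	# $CRIO_MINIKUBE_OPTIONS (here: --insecure-registry 10.96.0.0/12) reaches crio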
	
	I1109 13:29:23.187169  554049 machine.go:97] duration metric: took 1.136996743s to provisionDockerMachine
	I1109 13:29:23.187186  554049 client.go:176] duration metric: took 21.597936799s to LocalClient.Create
	I1109 13:29:23.187206  554049 start.go:167] duration metric: took 21.598018749s to libmachine.API.Create "addons-640912"
	I1109 13:29:23.187218  554049 start.go:293] postStartSetup for "addons-640912" (driver="kvm2")
	I1109 13:29:23.187233  554049 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 13:29:23.187304  554049 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 13:29:23.190951  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.191437  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:23.191471  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.191673  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:23.284957  554049 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 13:29:23.290908  554049 info.go:137] Remote host: Buildroot 2025.02
	I1109 13:29:23.290944  554049 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-549598/.minikube/addons for local assets ...
	I1109 13:29:23.291033  554049 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-549598/.minikube/files for local assets ...
	I1109 13:29:23.291059  554049 start.go:296] duration metric: took 103.83477ms for postStartSetup
	I1109 13:29:23.294496  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.294979  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:23.295007  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.295298  554049 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/config.json ...
	I1109 13:29:23.295545  554049 start.go:128] duration metric: took 21.708191701s to createHost
	I1109 13:29:23.298433  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.298897  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:23.298929  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.299160  554049 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:23.299426  554049 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1109 13:29:23.299443  554049 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1109 13:29:23.418842  554049 main.go:143] libmachine: SSH cmd err, output: <nil>: 1762694963.374684002
	
	I1109 13:29:23.418873  554049 fix.go:216] guest clock: 1762694963.374684002
	I1109 13:29:23.418882  554049 fix.go:229] Guest: 2025-11-09 13:29:23.374684002 +0000 UTC Remote: 2025-11-09 13:29:23.295558762 +0000 UTC m=+21.824848523 (delta=79.12524ms)
	I1109 13:29:23.418901  554049 fix.go:200] guest clock delta is within tolerance: 79.12524ms
	I1109 13:29:23.418908  554049 start.go:83] releasing machines lock for "addons-640912", held for 21.831643055s
	I1109 13:29:23.422763  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.423397  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:23.423435  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.424204  554049 ssh_runner.go:195] Run: cat /version.json
	I1109 13:29:23.424308  554049 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 13:29:23.428595  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.428753  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.429413  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:23.429427  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:23.429458  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.429456  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:23.429725  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:23.430070  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:23.517677  554049 ssh_runner.go:195] Run: systemctl --version
	I1109 13:29:23.548521  554049 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 13:29:23.725456  554049 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 13:29:23.734446  554049 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 13:29:23.734539  554049 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 13:29:23.762212  554049 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 13:29:23.762264  554049 start.go:496] detecting cgroup driver to use...
	I1109 13:29:23.762376  554049 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 13:29:23.789312  554049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 13:29:23.811825  554049 docker.go:218] disabling cri-docker service (if available) ...
	I1109 13:29:23.811901  554049 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 13:29:23.835122  554049 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 13:29:23.857937  554049 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 13:29:24.036028  554049 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 13:29:24.258493  554049 docker.go:234] disabling docker service ...
	I1109 13:29:24.258579  554049 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 13:29:24.279139  554049 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 13:29:24.297344  554049 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 13:29:24.474651  554049 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 13:29:24.636841  554049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 13:29:24.655525  554049 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 13:29:24.685964  554049 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 13:29:24.686029  554049 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:24.703327  554049 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 13:29:24.703429  554049 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:24.723542  554049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:24.741826  554049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:24.759313  554049 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 13:29:24.777075  554049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:24.793754  554049 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:24.819676  554049 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
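Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys; this is a reconstruction from the commands, not a verbatim dump of the file:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	# ]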
	I1109 13:29:24.834925  554049 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 13:29:24.851569  554049 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1109 13:29:24.851655  554049 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1109 13:29:24.880394  554049 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
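The sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, which the modprobe that follows takes care of. Re-running the check afterwards (sketch):

	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # now resolves instead of "cannot stat"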
	I1109 13:29:24.896630  554049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:29:25.060770  554049 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 13:29:25.189371  554049 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 13:29:25.189516  554049 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 13:29:25.198545  554049 start.go:564] Will wait 60s for crictl version
	I1109 13:29:25.198660  554049 ssh_runner.go:195] Run: which crictl
	I1109 13:29:25.205416  554049 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1109 13:29:25.257021  554049 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1109 13:29:25.257128  554049 ssh_runner.go:195] Run: crio --version
	I1109 13:29:25.294847  554049 ssh_runner.go:195] Run: crio --version
	I1109 13:29:25.335910  554049 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1109 13:29:25.340895  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:25.341471  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:25.341501  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:25.341823  554049 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1109 13:29:25.348236  554049 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
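The one-liner above is a sudo-safe rewrite idiom: a plain "sudo echo ... > /etc/hosts" would perform the redirection as the unprivileged user, so instead the filtered content plus the new entry is written to a temp file and copied into place with sudo. The net effect (sketch):

	grep 'host.minikube.internal' /etc/hosts
	# 192.168.39.1	host.minikube.internal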
	I1109 13:29:25.368735  554049 kubeadm.go:884] updating cluster {Name:addons-640912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-640912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 13:29:25.368898  554049 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:29:25.368946  554049 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:29:25.416200  554049 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1109 13:29:25.416292  554049 ssh_runner.go:195] Run: which lz4
	I1109 13:29:25.422425  554049 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1109 13:29:25.429188  554049 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1109 13:29:25.429238  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1109 13:29:27.484253  554049 crio.go:462] duration metric: took 2.061869484s to copy over tarball
	I1109 13:29:27.484374  554049 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1109 13:29:29.665537  554049 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.181121034s)
	I1109 13:29:29.665572  554049 crio.go:469] duration metric: took 2.181275636s to extract the tarball
	I1109 13:29:29.665583  554049 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1109 13:29:29.711172  554049 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:29:29.767518  554049 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:29:29.767551  554049 cache_images.go:86] Images are preloaded, skipping loading
	I1109 13:29:29.767560  554049 kubeadm.go:935] updating node { 192.168.39.228 8443 v1.34.1 crio true true} ...
	I1109 13:29:29.767658  554049 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-640912 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-640912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 13:29:29.767738  554049 ssh_runner.go:195] Run: crio config
	I1109 13:29:29.827752  554049 cni.go:84] Creating CNI manager for ""
	I1109 13:29:29.827802  554049 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 13:29:29.827828  554049 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 13:29:29.827856  554049 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.228 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-640912 NodeName:addons-640912 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 13:29:29.828036  554049 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-640912"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.228"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.228"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 13:29:29.828128  554049 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 13:29:29.842993  554049 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 13:29:29.843074  554049 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 13:29:29.857721  554049 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1109 13:29:29.885131  554049 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 13:29:29.910962  554049 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
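At this point the rendered config from the dump above sits at /var/tmp/minikube/kubeadm.yaml.new (2216 bytes). One way to sanity-check a config of this shape before init (a sketch; "kubeadm config validate" needs kubeadm >= 1.26):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
	# or rehearse the full phase ordering without standing up the control plane:
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run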
	I1109 13:29:29.937168  554049 ssh_runner.go:195] Run: grep 192.168.39.228	control-plane.minikube.internal$ /etc/hosts
	I1109 13:29:29.942897  554049 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:29:29.961825  554049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:29:30.136776  554049 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:29:30.177152  554049 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912 for IP: 192.168.39.228
	I1109 13:29:30.177200  554049 certs.go:195] generating shared ca certs ...
	I1109 13:29:30.177243  554049 certs.go:227] acquiring lock for ca certs: {Name:mkc766226c1ec8ac0cc61519ae61374bb0aa3b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:30.177612  554049 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key
	I1109 13:29:30.526469  554049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt ...
	I1109 13:29:30.526517  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt: {Name:mk1e1ec152f9e7533279dd061df1b855d91797d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:30.526783  554049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key ...
	I1109 13:29:30.526817  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key: {Name:mkb474930e06e0f2d9550b3e47f06fa0412d8c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:30.526988  554049 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key
	I1109 13:29:31.103187  554049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.crt ...
	I1109 13:29:31.103229  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.crt: {Name:mkeee8761eaad8a6feacfb3f1772dbd1f57cdfd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.103462  554049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key ...
	I1109 13:29:31.103479  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key: {Name:mke1379b4418067ce1a11d365cf664bfd6b63fb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.103597  554049 certs.go:257] generating profile certs ...
	I1109 13:29:31.103681  554049 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.key
	I1109 13:29:31.103717  554049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt with IP's: []
	I1109 13:29:31.393629  554049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt ...
	I1109 13:29:31.393668  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: {Name:mk1aadbe63d88684ddb1deb4c7d25f36cf84bd13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.393894  554049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.key ...
	I1109 13:29:31.393913  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.key: {Name:mkb1582a5c9747ad241e1432ddae43398ee47c0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.393997  554049 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.key.1735871a
	I1109 13:29:31.394017  554049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.crt.1735871a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.228]
	I1109 13:29:31.559744  554049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.crt.1735871a ...
	I1109 13:29:31.559782  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.crt.1735871a: {Name:mk1cf5afc9dcb9c29b6fdbc1d8dbda4b8a0ad1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.560029  554049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.key.1735871a ...
	I1109 13:29:31.560045  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.key.1735871a: {Name:mk6eb23d524b1cf83b979febf555dc8a2670dd3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.560132  554049 certs.go:382] copying /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.crt.1735871a -> /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.crt
	I1109 13:29:31.560210  554049 certs.go:386] copying /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.key.1735871a -> /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.key
	I1109 13:29:31.560259  554049 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.key
	I1109 13:29:31.560279  554049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.crt with IP's: []
	I1109 13:29:31.942231  554049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.crt ...
	I1109 13:29:31.942265  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.crt: {Name:mk94f935952bddbb6d98595db3e977b6c297b768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.942468  554049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.key ...
	I1109 13:29:31.942486  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.key: {Name:mk5d96b637c774a9f2904169fcbe11646a6b30aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:31.942692  554049 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 13:29:31.942732  554049 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem (1082 bytes)
	I1109 13:29:31.942757  554049 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem (1123 bytes)
	I1109 13:29:31.942779  554049 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem (1679 bytes)
	I1109 13:29:31.943571  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 13:29:31.984717  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 13:29:32.026837  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 13:29:32.064723  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 13:29:32.101911  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1109 13:29:32.137993  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 13:29:32.174693  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 13:29:32.211419  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 13:29:32.251159  554049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 13:29:32.289056  554049 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 13:29:32.315536  554049 ssh_runner.go:195] Run: openssl version
	I1109 13:29:32.323702  554049 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 13:29:32.341220  554049 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:29:32.348539  554049 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:29:32.348611  554049 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:29:32.357914  554049 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
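The b5213941.0 symlink name is not arbitrary: OpenSSL looks up CA certificates by subject-name hash, with .0 as the collision counter. The hash printed by the command pattern above is exactly the link's basename (sketch):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941   -> hence /etc/ssl/certs/b5213941.0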
	I1109 13:29:32.374131  554049 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 13:29:32.380778  554049 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 13:29:32.380859  554049 kubeadm.go:401] StartCluster: {Name:addons-640912 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-640912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:29:32.380939  554049 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:29:32.381031  554049 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:29:32.437303  554049 cri.go:89] found id: ""
	I1109 13:29:32.437443  554049 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 13:29:32.455469  554049 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 13:29:32.474586  554049 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 13:29:32.490883  554049 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 13:29:32.490932  554049 kubeadm.go:158] found existing configuration files:
	
	I1109 13:29:32.490984  554049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 13:29:32.507029  554049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 13:29:32.507106  554049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 13:29:32.526992  554049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 13:29:32.542026  554049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 13:29:32.542094  554049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 13:29:32.557462  554049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 13:29:32.571729  554049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 13:29:32.571847  554049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 13:29:32.586445  554049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 13:29:32.600460  554049 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 13:29:32.600546  554049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 13:29:32.615463  554049 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1109 13:29:32.797507  554049 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 13:29:45.565254  554049 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 13:29:45.565352  554049 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 13:29:45.565451  554049 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 13:29:45.565580  554049 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 13:29:45.565676  554049 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 13:29:45.565749  554049 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 13:29:45.567749  554049 out.go:252]   - Generating certificates and keys ...
	I1109 13:29:45.567923  554049 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 13:29:45.568028  554049 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 13:29:45.568143  554049 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 13:29:45.568242  554049 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 13:29:45.568335  554049 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 13:29:45.568419  554049 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 13:29:45.568504  554049 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 13:29:45.568659  554049 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-640912 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I1109 13:29:45.568749  554049 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 13:29:45.568968  554049 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-640912 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I1109 13:29:45.569082  554049 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 13:29:45.569231  554049 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 13:29:45.569297  554049 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 13:29:45.569348  554049 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 13:29:45.569405  554049 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 13:29:45.569456  554049 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 13:29:45.569506  554049 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 13:29:45.569621  554049 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 13:29:45.569729  554049 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 13:29:45.569897  554049 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 13:29:45.570022  554049 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 13:29:45.571849  554049 out.go:252]   - Booting up control plane ...
	I1109 13:29:45.572019  554049 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 13:29:45.572139  554049 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 13:29:45.572306  554049 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 13:29:45.572529  554049 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 13:29:45.572725  554049 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 13:29:45.572929  554049 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 13:29:45.573081  554049 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 13:29:45.573164  554049 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 13:29:45.573346  554049 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 13:29:45.573511  554049 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 13:29:45.573612  554049 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002203832s
	I1109 13:29:45.573738  554049 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 13:29:45.573866  554049 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.228:8443/livez
	I1109 13:29:45.574035  554049 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 13:29:45.574163  554049 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 13:29:45.574287  554049 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.851777111s
	I1109 13:29:45.574388  554049 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.338951376s
	I1109 13:29:45.574496  554049 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.505486059s
	I1109 13:29:45.574629  554049 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 13:29:45.574811  554049 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 13:29:45.574907  554049 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 13:29:45.575162  554049 kubeadm.go:319] [mark-control-plane] Marking the node addons-640912 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 13:29:45.575281  554049 kubeadm.go:319] [bootstrap-token] Using token: law7ws.rcnk7pdq4fp4bzd0
	I1109 13:29:45.577265  554049 out.go:252]   - Configuring RBAC rules ...
	I1109 13:29:45.577435  554049 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 13:29:45.577562  554049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 13:29:45.577700  554049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 13:29:45.577922  554049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 13:29:45.578078  554049 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 13:29:45.578193  554049 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 13:29:45.578331  554049 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 13:29:45.578397  554049 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 13:29:45.578469  554049 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 13:29:45.578482  554049 kubeadm.go:319] 
	I1109 13:29:45.578574  554049 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 13:29:45.578586  554049 kubeadm.go:319] 
	I1109 13:29:45.578689  554049 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 13:29:45.578704  554049 kubeadm.go:319] 
	I1109 13:29:45.578741  554049 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 13:29:45.578846  554049 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 13:29:45.578924  554049 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 13:29:45.578942  554049 kubeadm.go:319] 
	I1109 13:29:45.579023  554049 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 13:29:45.579033  554049 kubeadm.go:319] 
	I1109 13:29:45.579099  554049 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 13:29:45.579142  554049 kubeadm.go:319] 
	I1109 13:29:45.579228  554049 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 13:29:45.579340  554049 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 13:29:45.579492  554049 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 13:29:45.579520  554049 kubeadm.go:319] 
	I1109 13:29:45.579616  554049 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 13:29:45.579730  554049 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 13:29:45.579758  554049 kubeadm.go:319] 
	I1109 13:29:45.579905  554049 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token law7ws.rcnk7pdq4fp4bzd0 \
	I1109 13:29:45.580053  554049 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8c9d3f71532339ff1f74fe1baae16b7e7ca1ed75ea1b3aa4741816f874e79d71 \
	I1109 13:29:45.580099  554049 kubeadm.go:319] 	--control-plane 
	I1109 13:29:45.580109  554049 kubeadm.go:319] 
	I1109 13:29:45.580227  554049 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 13:29:45.580245  554049 kubeadm.go:319] 
	I1109 13:29:45.580332  554049 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token law7ws.rcnk7pdq4fp4bzd0 \
	I1109 13:29:45.580500  554049 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8c9d3f71532339ff1f74fe1baae16b7e7ca1ed75ea1b3aa4741816f874e79d71 
	I1109 13:29:45.580520  554049 cni.go:84] Creating CNI manager for ""
	I1109 13:29:45.580531  554049 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 13:29:45.582621  554049 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1109 13:29:45.584213  554049 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1109 13:29:45.605204  554049 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
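Note: the two ssh_runner lines above create /etc/cni/net.d and write a 496-byte bridge conflist into it. Below is a minimal Go sketch of that step; the JSON shows the usual shape of a bridge + host-local + portmap conflist with the 10.244.0.0/16 pod CIDR seen elsewhere in this run, not the exact payload minikube writes.

    // Sketch: write a bridge CNI conflist into /etc/cni/net.d.
    // The JSON body is illustrative, not minikube's exact file.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // Equivalent of "sudo mkdir -p /etc/cni/net.d" ...
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        // ... followed by "scp memory --> /etc/cni/net.d/1-k8s.conflist".
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }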
	I1109 13:29:45.642398  554049 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 13:29:45.642500  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:45.642500  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-640912 minikube.k8s.io/updated_at=2025_11_09T13_29_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=addons-640912 minikube.k8s.io/primary=true
	I1109 13:29:45.724991  554049 ops.go:34] apiserver oom_adj: -16
	I1109 13:29:45.860983  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:46.361977  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:46.862176  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:47.362111  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:47.861219  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:48.361843  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:48.861251  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:49.362130  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:49.861782  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:50.361731  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:50.861735  554049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:29:50.977332  554049 kubeadm.go:1114] duration metric: took 5.33492595s to wait for elevateKubeSystemPrivileges
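Note: the burst of "kubectl get sa default" lines above is minikube polling every ~500ms until the default service account exists, the elevateKubeSystemPrivileges step timed here at 5.33s. A minimal sketch of that wait loop follows; the kubectl and kubeconfig paths are taken from the log, while the timeout is an assumption.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls "kubectl get sa default" until it succeeds,
    // mirroring the ~500ms cadence of the log lines above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            err := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig="+kubeconfig).Run()
            if err == nil {
                return nil // default service account exists; RBAC setup can proceed
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        err := waitForDefaultSA(
            "/var/lib/minikube/binaries/v1.34.1/kubectl", // path from the log
            "/var/lib/minikube/kubeconfig",               // path from the log
            time.Minute,                                  // timeout is an assumption
        )
        if err != nil {
            fmt.Println(err)
        }
    }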
	I1109 13:29:50.977378  554049 kubeadm.go:403] duration metric: took 18.596524599s to StartCluster
	I1109 13:29:50.977400  554049 settings.go:142] acquiring lock: {Name:mkb59fcf785d78efbba1217c69544ee37b77198f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:50.977564  554049 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 13:29:50.978027  554049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/kubeconfig: {Name:mka7e7e8d5d1d87facf220110c90862a74355591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:50.978280  554049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 13:29:50.978317  554049 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:29:50.978398  554049 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1109 13:29:50.978554  554049 addons.go:70] Setting yakd=true in profile "addons-640912"
	I1109 13:29:50.978574  554049 config.go:182] Loaded profile config "addons-640912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:29:50.978592  554049 addons.go:70] Setting metrics-server=true in profile "addons-640912"
	I1109 13:29:50.978604  554049 addons.go:239] Setting addon metrics-server=true in "addons-640912"
	I1109 13:29:50.978583  554049 addons.go:70] Setting inspektor-gadget=true in profile "addons-640912"
	I1109 13:29:50.978629  554049 addons.go:70] Setting ingress=true in profile "addons-640912"
	I1109 13:29:50.978638  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978640  554049 addons.go:70] Setting ingress-dns=true in profile "addons-640912"
	I1109 13:29:50.978642  554049 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-640912"
	I1109 13:29:50.978649  554049 addons.go:239] Setting addon ingress=true in "addons-640912"
	I1109 13:29:50.978587  554049 addons.go:70] Setting default-storageclass=true in profile "addons-640912"
	I1109 13:29:50.978657  554049 addons.go:239] Setting addon ingress-dns=true in "addons-640912"
	I1109 13:29:50.978690  554049 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-640912"
	I1109 13:29:50.978715  554049 addons.go:70] Setting storage-provisioner=true in profile "addons-640912"
	I1109 13:29:50.978728  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978738  554049 addons.go:239] Setting addon storage-provisioner=true in "addons-640912"
	I1109 13:29:50.978757  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978772  554049 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-640912"
	I1109 13:29:50.978822  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978632  554049 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-640912"
	I1109 13:29:50.980356  554049 addons.go:70] Setting volumesnapshots=true in profile "addons-640912"
	I1109 13:29:50.980394  554049 addons.go:239] Setting addon volumesnapshots=true in "addons-640912"
	I1109 13:29:50.980427  554049 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-640912"
	I1109 13:29:50.980439  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.980467  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.980505  554049 addons.go:70] Setting registry=true in profile "addons-640912"
	I1109 13:29:50.980528  554049 addons.go:239] Setting addon registry=true in "addons-640912"
	I1109 13:29:50.980562  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978694  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.980786  554049 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-640912"
	I1109 13:29:50.980833  554049 addons.go:70] Setting registry-creds=true in profile "addons-640912"
	I1109 13:29:50.980872  554049 addons.go:239] Setting addon registry-creds=true in "addons-640912"
	I1109 13:29:50.980911  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.980979  554049 addons.go:70] Setting gcp-auth=true in profile "addons-640912"
	I1109 13:29:50.981016  554049 mustload.go:66] Loading cluster: addons-640912
	I1109 13:29:50.981255  554049 config.go:182] Loaded profile config "addons-640912": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:29:50.980845  554049 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-640912"
	I1109 13:29:50.981636  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978582  554049 addons.go:239] Setting addon yakd=true in "addons-640912"
	I1109 13:29:50.981926  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.982378  554049 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-640912"
	I1109 13:29:50.982484  554049 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-640912"
	I1109 13:29:50.978638  554049 addons.go:70] Setting cloud-spanner=true in profile "addons-640912"
	I1109 13:29:50.983108  554049 out.go:179] * Verifying Kubernetes components...
	I1109 13:29:50.983376  554049 addons.go:239] Setting addon cloud-spanner=true in "addons-640912"
	I1109 13:29:50.983433  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.978647  554049 addons.go:239] Setting addon inspektor-gadget=true in "addons-640912"
	I1109 13:29:50.983505  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.983818  554049 addons.go:70] Setting volcano=true in profile "addons-640912"
	I1109 13:29:50.983847  554049 addons.go:239] Setting addon volcano=true in "addons-640912"
	I1109 13:29:50.983888  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.985888  554049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:29:50.988835  554049 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 13:29:50.988855  554049 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1109 13:29:50.988850  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1109 13:29:50.990206  554049 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 13:29:50.990229  554049 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1109 13:29:50.990256  554049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1109 13:29:50.990263  554049 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1109 13:29:50.990235  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 13:29:50.990626  554049 addons.go:239] Setting addon default-storageclass=true in "addons-640912"
	I1109 13:29:50.990686  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.990970  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.992322  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1109 13:29:50.992399  554049 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1109 13:29:50.992401  554049 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1109 13:29:50.992422  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1109 13:29:50.994043  554049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:29:50.994224  554049 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1109 13:29:50.994233  554049 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1109 13:29:50.994273  554049 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1109 13:29:50.994288  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1109 13:29:50.994234  554049 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1109 13:29:50.995216  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	W1109 13:29:50.994377  554049 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1109 13:29:50.994419  554049 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-640912"
	I1109 13:29:50.995510  554049 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1109 13:29:50.995521  554049 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1109 13:29:50.995518  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:29:50.995532  554049 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1109 13:29:50.995544  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1109 13:29:50.997116  554049 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1109 13:29:50.995622  554049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1109 13:29:50.996314  554049 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1109 13:29:50.996379  554049 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1109 13:29:50.996881  554049 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 13:29:50.997509  554049 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 13:29:50.997520  554049 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1109 13:29:50.997721  554049 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1109 13:29:50.997770  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1109 13:29:50.997196  554049 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1109 13:29:50.997851  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1109 13:29:50.998125  554049 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1109 13:29:50.998204  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1109 13:29:50.998240  554049 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1109 13:29:50.998255  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1109 13:29:50.998809  554049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:29:51.000211  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1109 13:29:51.000212  554049 out.go:179]   - Using image docker.io/registry:3.0.0
	I1109 13:29:51.000409  554049 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1109 13:29:51.000431  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1109 13:29:51.001876  554049 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1109 13:29:51.001963  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1109 13:29:51.002318  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.003013  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1109 13:29:51.003111  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.004326  554049 out.go:179]   - Using image docker.io/busybox:stable
	I1109 13:29:51.004405  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.004447  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.005229  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.005291  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.005684  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.005724  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1109 13:29:51.006264  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.006874  554049 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1109 13:29:51.007777  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.008155  554049 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1109 13:29:51.008180  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1109 13:29:51.008195  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1109 13:29:51.008364  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.009951  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.009992  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.010700  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.010752  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.010781  554049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1109 13:29:51.010862  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.011032  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.011751  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.012288  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1109 13:29:51.012399  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.012548  554049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1109 13:29:51.012598  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.012686  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.012719  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.013507  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.013623  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.013651  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.013707  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.014346  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.014367  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.014370  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.014490  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.014521  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.014605  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.015131  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.015563  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.015596  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.015899  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.016519  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.016594  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.016603  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.016624  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.016745  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.016827  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.016931  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.017041  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.017091  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.017453  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.017728  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.017778  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.017836  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.017934  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.017973  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.018186  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.019148  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.020575  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.021218  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.021220  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.021272  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.021523  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:51.021963  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:51.021994  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:51.022184  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	W1109 13:29:51.386846  554049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57418->192.168.39.228:22: read: connection reset by peer
	I1109 13:29:51.386892  554049 retry.go:31] will retry after 222.983762ms: ssh: handshake failed: read tcp 192.168.39.1:57418->192.168.39.228:22: read: connection reset by peer
	W1109 13:29:51.444433  554049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57434->192.168.39.228:22: read: connection reset by peer
	I1109 13:29:51.444480  554049 retry.go:31] will retry after 227.572873ms: ssh: handshake failed: read tcp 192.168.39.1:57434->192.168.39.228:22: read: connection reset by peer
	W1109 13:29:51.612303  554049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57454->192.168.39.228:22: read: connection reset by peer
	I1109 13:29:51.612342  554049 retry.go:31] will retry after 211.681358ms: ssh: handshake failed: read tcp 192.168.39.1:57454->192.168.39.228:22: read: connection reset by peer
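Note: the three handshake failures above are benign; the dialer backs off a couple hundred milliseconds and tries again. A minimal sketch of that retry pattern, assuming only what the log shows (a randomized delay in roughly the 200-250ms band and a bounded number of attempts):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // dialWithRetry retries a failing dial with a small randomized delay,
    // matching the "will retry after 222.983762ms"-style lines above.
    func dialWithRetry(dial func() error, attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = dial(); err == nil {
                return nil
            }
            // Randomized backoff in the ~200-250ms band seen in the log.
            d := 200*time.Millisecond + time.Duration(rand.Int63n(int64(50*time.Millisecond)))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        calls := 0
        _ = dialWithRetry(func() error {
            calls++
            if calls < 3 { // first two dials fail, like the connection resets above
                return fmt.Errorf("ssh: handshake failed (attempt %d)", calls)
            }
            return nil
        }, 5)
    }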
	I1109 13:29:52.010077  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 13:29:52.235070  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1109 13:29:52.331852  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 13:29:52.352738  554049 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1109 13:29:52.352773  554049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1109 13:29:52.395388  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1109 13:29:52.440660  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1109 13:29:52.445961  554049 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1109 13:29:52.446002  554049 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1109 13:29:52.448181  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1109 13:29:52.448236  554049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1109 13:29:52.544737  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1109 13:29:52.551025  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1109 13:29:52.566441  554049 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1109 13:29:52.566471  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1109 13:29:52.632342  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1109 13:29:52.729889  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1109 13:29:53.011917  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1109 13:29:53.259319  554049 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (2.280994349s)
	I1109 13:29:53.259435  554049 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (2.273499892s)
	I1109 13:29:53.259518  554049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
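Note: the bash pipeline above rewrites the coredns ConfigMap in place: the first sed expression inserts a hosts stanza mapping host.minikube.internal to the gateway IP before the forward directive, the second inserts a log directive before errors, and the result is fed to "kubectl replace -f -". A Go sketch of the same text transform on an illustrative Corefile fragment (not the cluster's actual ConfigMap):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Illustrative Corefile fragment, not the real ConfigMap contents.
        corefile := `.:53 {
            errors
            forward . /etc/resolv.conf
        }`

        hosts := `hosts {
               192.168.39.1 host.minikube.internal
               fallthrough
            }
            `

        // Equivalent of the first sed expression: insert the hosts stanza
        // before the forward directive.
        patched := strings.Replace(corefile, "forward .", hosts+"forward .", 1)
        // Equivalent of the second sed expression: insert "log" before "errors".
        patched = strings.Replace(patched, "errors", "log\n        errors", 1)

        fmt.Println(patched) // minikube pipes this to "kubectl replace -f -"
    }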
	I1109 13:29:53.259530  554049 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:29:53.377079  554049 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1109 13:29:53.377125  554049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1109 13:29:53.410957  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1109 13:29:53.410995  554049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1109 13:29:53.492668  554049 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1109 13:29:53.492714  554049 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1109 13:29:53.541620  554049 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1109 13:29:53.541665  554049 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1109 13:29:53.652096  554049 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1109 13:29:53.652133  554049 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1109 13:29:53.995555  554049 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1109 13:29:53.995587  554049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1109 13:29:54.033651  554049 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 13:29:54.033695  554049 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1109 13:29:54.067822  554049 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1109 13:29:54.067856  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1109 13:29:54.196207  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1109 13:29:54.196244  554049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1109 13:29:54.227433  554049 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1109 13:29:54.227464  554049 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1109 13:29:54.679076  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1109 13:29:54.679121  554049 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1109 13:29:54.696117  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 13:29:54.741459  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1109 13:29:54.881208  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1109 13:29:54.881247  554049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1109 13:29:54.915127  554049 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1109 13:29:54.915176  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1109 13:29:55.272351  554049 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:29:55.272388  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1109 13:29:55.364173  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1109 13:29:55.383308  554049 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1109 13:29:55.383345  554049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1109 13:29:56.059938  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:29:56.248473  554049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1109 13:29:56.248504  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1109 13:29:57.014690  554049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1109 13:29:57.014726  554049 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1109 13:29:57.519712  554049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1109 13:29:57.519740  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1109 13:29:58.054597  554049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1109 13:29:58.054639  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1109 13:29:58.434364  554049 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1109 13:29:58.438873  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:58.439831  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:29:58.439910  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:29:58.440311  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:29:58.622773  554049 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1109 13:29:58.622820  554049 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1109 13:29:59.371356  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1109 13:29:59.505293  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.495157485s)
	I1109 13:29:59.785061  554049 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1109 13:30:00.392747  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.157615738s)
	I1109 13:30:00.392753  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.060856374s)
	I1109 13:30:00.392830  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.997405175s)
	I1109 13:30:00.392922  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.952211987s)
	I1109 13:30:00.738340  554049 addons.go:239] Setting addon gcp-auth=true in "addons-640912"
	I1109 13:30:00.738422  554049 host.go:66] Checking if "addons-640912" exists ...
	I1109 13:30:00.741137  554049 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1109 13:30:00.745233  554049 main.go:143] libmachine: domain addons-640912 has defined MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:30:00.746101  554049 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:97:c4", ip: ""} in network mk-addons-640912: {Iface:virbr1 ExpiryTime:2025-11-09 14:29:20 +0000 UTC Type:0 Mac:52:54:00:3b:97:c4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-640912 Clientid:01:52:54:00:3b:97:c4}
	I1109 13:30:00.746150  554049 main.go:143] libmachine: domain addons-640912 has defined IP address 192.168.39.228 and MAC address 52:54:00:3b:97:c4 in network mk-addons-640912
	I1109 13:30:00.746504  554049 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/addons-640912/id_rsa Username:docker}
	I1109 13:30:04.324164  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.779383152s)
	I1109 13:30:04.324212  554049 addons.go:480] Verifying addon ingress=true in "addons-640912"
	I1109 13:30:04.324312  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (11.691928557s)
	I1109 13:30:04.324279  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.773208765s)
	I1109 13:30:04.324397  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.594469361s)
	I1109 13:30:04.324471  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.312528576s)
	I1109 13:30:04.324546  554049 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (11.064996826s)
	I1109 13:30:04.324573  554049 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (11.065034447s)
	I1109 13:30:04.324596  554049 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1109 13:30:04.324784  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.628636294s)
	I1109 13:30:04.324825  554049 addons.go:480] Verifying addon metrics-server=true in "addons-640912"
	I1109 13:30:04.324875  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.583378818s)
	I1109 13:30:04.324892  554049 addons.go:480] Verifying addon registry=true in "addons-640912"
	I1109 13:30:04.325127  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.2651527s)
	W1109 13:30:04.325346  554049 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1109 13:30:04.325153  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.960820727s)
	I1109 13:30:04.325383  554049 retry.go:31] will retry after 202.022969ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
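Note: this failure is an ordering problem, exactly as the stderr says: the VolumeSnapshotClass object and the CRDs that define its kind were applied in one batch, and the API server had not yet registered the new kind when the object arrived. minikube simply retries (and re-applies with --force at 13:30:04.527734 below). A hedged sketch of the apply-CRDs-first alternative, reusing the kubectl and manifest paths from the log; the explicit wait step is an assumption for illustration, not what minikube does:

    package main

    import "os/exec"

    const (
        kubectl    = "/var/lib/minikube/binaries/v1.34.1/kubectl" // from the log
        kubeconfig = "--kubeconfig=/var/lib/minikube/kubeconfig"  // from the log
    )

    func run(args ...string) error {
        return exec.Command("sudo", append([]string{kubectl, kubeconfig}, args...)...).Run()
    }

    func main() {
        // 1. Apply the CRDs on their own first.
        _ = run("apply",
            "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
            "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
            "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml")
        // 2. Wait until the API server actually serves the new kinds.
        _ = run("wait", "--for=condition=established", "--timeout=60s",
            "crd/volumesnapshotclasses.snapshot.storage.k8s.io")
        // 3. Only now apply objects of those kinds, e.g. the VolumeSnapshotClass.
        _ = run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
    }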
	I1109 13:30:04.325661  554049 node_ready.go:35] waiting up to 6m0s for node "addons-640912" to be "Ready" ...
	I1109 13:30:04.325983  554049 out.go:179] * Verifying ingress addon...
	I1109 13:30:04.326814  554049 out.go:179] * Verifying registry addon...
	I1109 13:30:04.327420  554049 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-640912 service yakd-dashboard -n yakd-dashboard
	
	I1109 13:30:04.328195  554049 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1109 13:30:04.328903  554049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1109 13:30:04.421113  554049 node_ready.go:49] node "addons-640912" is "Ready"
	I1109 13:30:04.421170  554049 node_ready.go:38] duration metric: took 95.473426ms for node "addons-640912" to be "Ready" ...
	I1109 13:30:04.421193  554049 api_server.go:52] waiting for apiserver process to appear ...
	I1109 13:30:04.421252  554049 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:30:04.436573  554049 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1109 13:30:04.436601  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:04.437324  554049 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1109 13:30:04.437349  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
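Note: the kapi.go lines above poll pods by label selector until they leave Pending. A rough client-go equivalent of one poll iteration, shown for illustration only (minikube's kapi package is internal, and the kubeconfig path here is an assumption):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is an assumption for illustration.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // One poll iteration: list pods by the same label selector as the log.
        pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "app.kubernetes.io/name=ingress-nginx"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            running := p.Status.Phase == corev1.PodRunning
            fmt.Printf("%s: phase=%s running=%v\n", p.Name, p.Status.Phase, running)
        }
    }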
	I1109 13:30:04.527734  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:30:04.850888  554049 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-640912" context rescaled to 1 replicas
	I1109 13:30:04.887833  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:04.891917  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:05.342314  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:05.346335  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:05.863995  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:05.864036  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:06.396694  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:06.402111  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:06.554519  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.183096689s)
	I1109 13:30:06.554582  554049 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-640912"
	I1109 13:30:06.554594  554049 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.813405865s)
	I1109 13:30:06.554623  554049 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.133355636s)
	I1109 13:30:06.554651  554049 api_server.go:72] duration metric: took 15.57629663s to wait for apiserver process to appear ...
	I1109 13:30:06.554661  554049 api_server.go:88] waiting for apiserver healthz status ...
	I1109 13:30:06.554691  554049 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I1109 13:30:06.556403  554049 out.go:179] * Verifying csi-hostpath-driver addon...
	I1109 13:30:06.556401  554049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:30:06.559165  554049 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1109 13:30:06.559901  554049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1109 13:30:06.560844  554049 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1109 13:30:06.560881  554049 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1109 13:30:06.598285  554049 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
	ok
	I1109 13:30:06.612843  554049 api_server.go:141] control plane version: v1.34.1
	I1109 13:30:06.612893  554049 api_server.go:131] duration metric: took 58.222701ms to wait for apiserver health ...
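api_server.go first confirms the kube-apiserver process exists (the pgrep above), then polls the /healthz endpoint until it answers 200 "ok". A self-contained sketch of that probe, using the address from the log; TLS verification is skipped only to keep the example short (the real client presumably trusts the cluster CA):

```go
// healthz.go — sketch of the apiserver healthz probe seen above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.228:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}
```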
	I1109 13:30:06.612928  554049 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 13:30:06.645111  554049 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1109 13:30:06.645145  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:06.677129  554049 system_pods.go:59] 20 kube-system pods found
	I1109 13:30:06.677261  554049 system_pods.go:61] "amd-gpu-device-plugin-2tv7p" [0019249b-f40e-4609-b592-f9fcc146c80a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1109 13:30:06.677278  554049 system_pods.go:61] "coredns-66bc5c9577-s9xxb" [e5d6eb11-cd0f-4ef0-b1ae-938e4c32f04b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:06.677293  554049 system_pods.go:61] "coredns-66bc5c9577-xtt8z" [4c0e27e8-3047-4a17-9435-f9185e872696] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:06.677304  554049 system_pods.go:61] "csi-hostpath-attacher-0" [d822fdee-fb25-4634-83b9-e9da33b6b333] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:30:06.677316  554049 system_pods.go:61] "csi-hostpath-resizer-0" [3d5fea9b-7c9b-4665-ac68-5e296d36729f] Pending
	I1109 13:30:06.677326  554049 system_pods.go:61] "csi-hostpathplugin-9dzzw" [ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:30:06.677338  554049 system_pods.go:61] "etcd-addons-640912" [be48210e-3e9d-4a68-b5a4-80f4a26fa4be] Running
	I1109 13:30:06.677344  554049 system_pods.go:61] "kube-apiserver-addons-640912" [066566b1-566d-491b-8be7-e1bf16b2ecb1] Running
	I1109 13:30:06.677349  554049 system_pods.go:61] "kube-controller-manager-addons-640912" [daaf7f94-d2de-42ec-8cd7-37bae6ec43ad] Running
	I1109 13:30:06.677359  554049 system_pods.go:61] "kube-ingress-dns-minikube" [fa72b9e2-abd1-49dd-b3cb-155aafc6e442] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:30:06.677369  554049 system_pods.go:61] "kube-proxy-8hbf4" [97813667-ffbc-4b8a-a122-3fa531d57ee3] Running
	I1109 13:30:06.677376  554049 system_pods.go:61] "kube-scheduler-addons-640912" [051715db-03e5-4cae-9b74-60fe58511b6b] Running
	I1109 13:30:06.677387  554049 system_pods.go:61] "metrics-server-85b7d694d7-lfl7n" [d4722f0c-b942-4bf8-88e4-a8d5b09f6fdf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:30:06.677399  554049 system_pods.go:61] "nvidia-device-plugin-daemonset-vhlxq" [b849210d-a4dd-4bfc-ad75-3bf99c214e37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:30:06.677407  554049 system_pods.go:61] "registry-6b586f9694-wm87r" [59b2fc4f-c09e-47b3-ac30-a3dce9d1d9b1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:30:06.677419  554049 system_pods.go:61] "registry-creds-764b6fb674-z2sqx" [8e9dea64-3610-47e1-9a4d-1f13f275439e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:30:06.677434  554049 system_pods.go:61] "registry-proxy-d6q28" [bc34c737-250f-4084-859d-d21c43a619d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:30:06.677445  554049 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pgl85" [d9a227fb-a833-4bc3-928b-eacf5e94bd0f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:06.677474  554049 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qk9k2" [548acda2-9430-4b25-a3a8-09e0a17aa95f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:06.677489  554049 system_pods.go:61] "storage-provisioner" [59b766d9-adb7-4e0c-bc68-9d9cf8fbbdba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:30:06.677500  554049 system_pods.go:74] duration metric: took 64.564101ms to wait for pod list to return data ...
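Note that system_pods only waits for the pod list to return, not for readiness: most entries above are phase Pending, or Running with Ready=False, which is why the per-label kapi.go waits below keep polling. One quick, illustrative way to surface the stragglers — keeping in mind that phase and the Ready condition are distinct, so a Running-but-unready pod needs a condition check rather than this field selector:

```go
// pending.go — list kube-system pods whose phase is still short of Running.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "get", "pods", "-n", "kube-system",
		"--field-selector=status.phase!=Running").CombinedOutput()
	if err != nil {
		fmt.Println(string(out))
		panic(err)
	}
	fmt.Print(string(out))
}
```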
	I1109 13:30:06.677515  554049 default_sa.go:34] waiting for default service account to be created ...
	I1109 13:30:06.698871  554049 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1109 13:30:06.698911  554049 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1109 13:30:06.723783  554049 default_sa.go:45] found service account: "default"
	I1109 13:30:06.723870  554049 default_sa.go:55] duration metric: took 46.344804ms for default service account to be created ...
	I1109 13:30:06.723888  554049 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 13:30:06.784361  554049 system_pods.go:86] 20 kube-system pods found
	I1109 13:30:06.784424  554049 system_pods.go:89] "amd-gpu-device-plugin-2tv7p" [0019249b-f40e-4609-b592-f9fcc146c80a] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1109 13:30:06.784438  554049 system_pods.go:89] "coredns-66bc5c9577-s9xxb" [e5d6eb11-cd0f-4ef0-b1ae-938e4c32f04b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:06.784456  554049 system_pods.go:89] "coredns-66bc5c9577-xtt8z" [4c0e27e8-3047-4a17-9435-f9185e872696] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:30:06.784466  554049 system_pods.go:89] "csi-hostpath-attacher-0" [d822fdee-fb25-4634-83b9-e9da33b6b333] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:30:06.784474  554049 system_pods.go:89] "csi-hostpath-resizer-0" [3d5fea9b-7c9b-4665-ac68-5e296d36729f] Pending
	I1109 13:30:06.784485  554049 system_pods.go:89] "csi-hostpathplugin-9dzzw" [ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:30:06.784495  554049 system_pods.go:89] "etcd-addons-640912" [be48210e-3e9d-4a68-b5a4-80f4a26fa4be] Running
	I1109 13:30:06.784616  554049 system_pods.go:89] "kube-apiserver-addons-640912" [066566b1-566d-491b-8be7-e1bf16b2ecb1] Running
	I1109 13:30:06.784630  554049 system_pods.go:89] "kube-controller-manager-addons-640912" [daaf7f94-d2de-42ec-8cd7-37bae6ec43ad] Running
	I1109 13:30:06.784642  554049 system_pods.go:89] "kube-ingress-dns-minikube" [fa72b9e2-abd1-49dd-b3cb-155aafc6e442] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:30:06.784654  554049 system_pods.go:89] "kube-proxy-8hbf4" [97813667-ffbc-4b8a-a122-3fa531d57ee3] Running
	I1109 13:30:06.784663  554049 system_pods.go:89] "kube-scheduler-addons-640912" [051715db-03e5-4cae-9b74-60fe58511b6b] Running
	I1109 13:30:06.784714  554049 system_pods.go:89] "metrics-server-85b7d694d7-lfl7n" [d4722f0c-b942-4bf8-88e4-a8d5b09f6fdf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:30:06.784734  554049 system_pods.go:89] "nvidia-device-plugin-daemonset-vhlxq" [b849210d-a4dd-4bfc-ad75-3bf99c214e37] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:30:06.784749  554049 system_pods.go:89] "registry-6b586f9694-wm87r" [59b2fc4f-c09e-47b3-ac30-a3dce9d1d9b1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:30:06.784761  554049 system_pods.go:89] "registry-creds-764b6fb674-z2sqx" [8e9dea64-3610-47e1-9a4d-1f13f275439e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:30:06.784769  554049 system_pods.go:89] "registry-proxy-d6q28" [bc34c737-250f-4084-859d-d21c43a619d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:30:06.784779  554049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pgl85" [d9a227fb-a833-4bc3-928b-eacf5e94bd0f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:06.784787  554049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qk9k2" [548acda2-9430-4b25-a3a8-09e0a17aa95f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:30:06.784813  554049 system_pods.go:89] "storage-provisioner" [59b766d9-adb7-4e0c-bc68-9d9cf8fbbdba] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:30:06.784835  554049 system_pods.go:126] duration metric: took 60.936237ms to wait for k8s-apps to be running ...
	I1109 13:30:06.784852  554049 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 13:30:06.784957  554049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
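system_svc checks kubelet through systemd, relying on the fact that `systemctl is-active --quiet` prints nothing and reports purely via exit status (0 means active). A sketch of the same check with the standard invocation:

```go
// kubeletcheck.go — sketch: systemd unit liveness via exit code only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err == nil {
		fmt.Println("kubelet service is active")
	} else {
		fmt.Println("kubelet not active:", err) // non-zero exit => inactive/failed
	}
}
```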
	I1109 13:30:06.790756  554049 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1109 13:30:06.790815  554049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1109 13:30:06.855567  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:06.856076  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:06.996894  554049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1109 13:30:07.069630  554049 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1109 13:30:07.069669  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:07.357817  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:07.358129  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:07.585714  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:07.837469  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:07.842429  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:08.001868  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.47407204s)
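With the snapshot CRDs now established, the --force re-apply above completes successfully (3.47s), clearing the earlier "resource mapping not found" failure for the VolumeSnapshotClass.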
	I1109 13:30:08.001935  554049 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.216918597s)
	I1109 13:30:08.001974  554049 system_svc.go:56] duration metric: took 1.217116528s WaitForService to wait for kubelet
	I1109 13:30:08.001988  554049 kubeadm.go:587] duration metric: took 17.023632052s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:30:08.002022  554049 node_conditions.go:102] verifying NodePressure condition ...
	I1109 13:30:08.013233  554049 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1109 13:30:08.013279  554049 node_conditions.go:123] node cpu capacity is 2
	I1109 13:30:08.013321  554049 node_conditions.go:105] duration metric: took 11.288216ms to run NodePressure ...
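The NodePressure check reads the node's capacity as reported above: 17734596Ki of ephemeral storage is about 16.9 GiB, alongside 2 CPUs. The unit conversion, for reference:

```go
// capacity.go — convert the Ki quantity from the log into GiB.
package main

import "fmt"

func main() {
	const kib = 17734596.0         // ephemeral-storage capacity from the log, in KiB
	gib := kib / (1024.0 * 1024.0) // KiB -> GiB
	fmt.Printf("%.1f GiB ephemeral storage, 2 CPUs\n", gib) // ~16.9 GiB
}
```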
	I1109 13:30:08.013341  554049 start.go:242] waiting for startup goroutines ...
	I1109 13:30:08.072285  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:08.333086  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:08.336474  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:08.572226  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:08.887336  554049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.890385858s)
	I1109 13:30:08.888525  554049 addons.go:480] Verifying addon gcp-auth=true in "addons-640912"
	I1109 13:30:08.890860  554049 out.go:179] * Verifying gcp-auth addon...
	I1109 13:30:08.892713  554049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1109 13:30:08.939244  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:08.939347  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:08.991310  554049 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1109 13:30:08.991337  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:09.098338  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:09.344858  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:09.345304  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:09.399368  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:09.570285  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:09.838385  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:09.840384  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:09.898869  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:10.065083  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:10.334202  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:10.334309  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:10.401950  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:10.569284  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:10.836515  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:10.838899  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:10.896313  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:11.067129  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:11.339416  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:11.340743  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:11.402448  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:11.566253  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:11.837985  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:11.838020  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:11.898902  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:12.066368  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:12.337501  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:12.338519  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:12.399240  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:12.571326  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:12.832263  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:12.838277  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:12.897716  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:13.073975  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:13.345785  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:13.348013  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:13.397374  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:13.564325  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:13.837325  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:13.843684  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:13.902254  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:14.068483  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:14.335320  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:14.338051  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:14.396277  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:14.565373  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:14.834165  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:14.834467  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:14.897445  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:15.064757  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:15.333021  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:15.333719  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:15.397830  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:15.566785  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:15.835560  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:15.838276  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:15.900642  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:16.067501  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:16.337462  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:16.337641  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:16.398587  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:16.566906  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:16.834191  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:16.834422  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:16.897896  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:17.066472  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:17.336985  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:17.337337  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:17.399227  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:17.565260  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:17.836508  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:17.837830  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:17.897337  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:18.065001  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:18.332999  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:18.335394  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:18.402571  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:18.564851  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:18.840456  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:18.843062  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:18.899589  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:19.068832  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:19.339870  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:19.341559  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:19.399386  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:19.587728  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:20.102869  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:20.102915  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:20.104530  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:20.104680  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:20.336692  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:20.336706  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:20.436134  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:20.563604  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:20.837295  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:20.843051  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:20.936258  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:21.065172  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:21.334067  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:21.335105  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:21.396790  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:21.564002  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:21.835247  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:21.835561  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:21.898139  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:22.070927  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:22.334447  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:22.334961  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:22.396866  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:22.567180  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:22.840032  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:22.840068  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:22.896778  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:23.071532  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:23.339919  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:23.340496  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:23.397236  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:23.566063  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:23.839678  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:23.841282  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:23.901245  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:24.071668  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:24.334636  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:24.335846  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:24.398620  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:24.567631  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:24.836032  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:24.836151  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:24.935042  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:25.065721  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:25.336610  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:25.337364  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:25.400426  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:25.566021  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:25.836480  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:25.838214  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:25.902147  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:26.071427  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:26.338573  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:26.338582  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:26.398771  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:26.565358  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:26.836720  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:26.840552  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:26.901096  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:27.067504  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:27.339750  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:27.341731  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:27.402242  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:27.569891  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:27.833392  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:27.833537  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:27.906589  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:28.065108  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:28.337155  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:28.337297  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:28.397195  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:28.566495  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:28.904921  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:28.907409  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:28.907434  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:29.072857  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:29.334467  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:29.336353  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:29.399920  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:29.566093  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:29.837017  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:29.840579  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:29.902450  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:30.065577  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:30.719201  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:30.724919  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:30.724939  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:30.724986  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:30.833958  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:30.834194  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:30.900339  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:31.065316  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:31.333087  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:31.333171  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:31.398332  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:31.564881  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:31.833924  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:31.837095  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:31.897424  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:32.069730  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:32.337945  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:32.340042  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:32.401234  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:32.567187  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:32.843640  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:32.847045  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:32.898376  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:33.069348  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:33.334614  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:33.339537  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:33.398429  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:33.566402  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:34.077754  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:34.078072  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:34.078683  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:34.079618  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:34.334588  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:34.337189  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:34.397855  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:34.572190  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:34.849654  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:34.849861  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:34.896948  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:35.074479  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:35.348183  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:35.356209  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:35.411951  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:35.570590  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:35.845515  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:35.845555  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:35.905142  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:36.071389  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:36.338701  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:36.340912  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:36.400596  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:36.568710  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:36.911585  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:36.915424  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:36.916949  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:37.067760  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:37.336355  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:37.339107  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:37.398618  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:37.569194  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:37.845063  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:37.845916  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:37.899000  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:38.067362  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:38.334562  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:38.336207  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:38.400168  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:38.571573  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:38.973809  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:38.974144  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:38.974151  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:39.068129  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:39.333195  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:39.335360  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:39.397654  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:39.564893  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:39.833320  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:39.839558  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:39.898484  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:40.065477  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:40.340552  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:40.341676  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:40.397951  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:40.568140  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:40.845076  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:40.845487  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:40.898753  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:41.071899  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:41.346589  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:41.359208  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:41.403903  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:41.571002  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:41.833974  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:41.837788  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:41.898684  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:42.069463  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:42.335582  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:42.338032  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:42.398193  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:42.565475  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:42.835535  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:42.837038  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:42.937572  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:43.073282  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:43.339090  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:43.339461  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:43.396901  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:43.586382  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:43.838864  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:43.843579  554049 kapi.go:107] duration metric: took 39.514673062s to wait for kubernetes.io/minikube-addons=registry ...
	I1109 13:30:43.905634  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:44.064934  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:44.332975  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:44.396420  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:44.571769  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:44.833998  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:44.897776  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:45.068095  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:45.344379  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:45.402752  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:45.574628  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:45.837165  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:45.899358  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:46.067886  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:46.335065  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:46.403112  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:46.577103  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:46.839115  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:46.896120  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:47.076119  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:47.350771  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:47.401893  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:47.571338  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:47.837062  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:47.896673  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:48.066817  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:48.337456  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:48.398614  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:48.565456  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:48.833611  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:48.897408  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:49.064823  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:49.335724  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:49.408948  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:49.565312  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:49.833445  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:49.898385  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:50.064095  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:50.334339  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:50.397598  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:50.569309  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:50.836332  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:50.898692  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:51.066221  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:51.480743  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:51.480846  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:51.568243  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:51.833039  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:51.933871  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:52.065619  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:52.335123  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:52.396946  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:52.566374  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:52.864538  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:52.956580  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:53.066131  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:53.340918  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:53.397918  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:53.570472  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:53.832824  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:53.899448  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:54.065472  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:54.332326  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:54.397534  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:54.568817  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:54.832947  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:54.901046  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:55.064454  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:55.335531  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:55.399529  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:55.569216  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:55.838545  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:55.905412  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:56.067458  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:56.334763  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:56.402225  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:56.768475  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:56.835262  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:56.907870  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:57.067772  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:57.339379  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:57.439713  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:57.573371  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:57.839441  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:57.908300  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:58.068309  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:58.338714  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:58.401258  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:58.565431  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:58.832874  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:58.897895  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:59.076776  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:59.332886  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:59.401336  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:59.572413  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:59.836884  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:59.935930  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:00.205382  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:00.341292  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:00.396631  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:00.568505  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:00.837424  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:00.929421  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:01.069724  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:01.335835  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:01.400290  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:01.564385  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:01.833209  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:01.898880  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:02.067659  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:02.333957  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:02.401527  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:02.573124  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:02.843273  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:02.946887  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:03.068597  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:03.336581  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:03.399764  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:03.567632  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:04.070184  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:04.071224  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:04.075196  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:04.337446  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:04.437852  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:04.566623  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:04.849898  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:04.946693  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:05.069001  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:05.335428  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:05.401410  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:05.566746  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:05.850306  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:05.910812  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:06.073522  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:06.350358  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:06.398770  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:06.570578  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:06.835835  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:06.937150  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:07.070212  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:07.342676  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:07.441121  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:07.575162  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:07.843325  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:07.898217  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:08.069896  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:08.336282  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:08.436654  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:08.572085  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:08.836872  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:08.900081  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:09.066104  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:09.331853  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:09.400057  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:09.564879  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:09.873005  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:09.897692  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:10.066725  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:10.339369  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:10.399557  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:10.572087  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:10.838743  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:10.897458  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:11.067721  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:11.335546  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:11.397389  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:11.566619  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:11.839606  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:11.902886  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:12.068399  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:12.332049  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:12.401492  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:12.565507  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:12.835898  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:12.907128  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:13.066925  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:13.338046  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:13.400870  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:13.563107  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:13.834771  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:13.937396  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:14.068487  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:14.332717  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:14.399661  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:14.570753  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:14.833332  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:14.897424  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:15.204038  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:15.339763  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:15.397926  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:15.568164  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:15.836548  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:15.899073  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:16.066864  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:16.333466  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:16.397331  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:16.567861  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:16.833409  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:16.897614  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:17.070130  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:17.337400  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:17.401374  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:17.565736  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:17.841910  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:17.898786  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:18.070624  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:18.333680  554049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:18.401244  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:18.571631  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:18.833883  554049 kapi.go:107] duration metric: took 1m14.505685559s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1109 13:31:18.898024  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:19.073709  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:19.402477  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:19.565307  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:19.904075  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:20.068726  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:20.398760  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:20.565697  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:20.896731  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:21.064644  554049 kapi.go:107] duration metric: took 1m14.504756398s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1109 13:31:21.397137  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:21.897734  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:22.398588  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:23.010336  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:23.397591  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:23.902542  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:24.399075  554049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:24.897122  554049 kapi.go:107] duration metric: took 1m16.004408046s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1109 13:31:24.898930  554049 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-640912 cluster.
	I1109 13:31:24.900363  554049 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1109 13:31:24.901752  554049 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1109 13:31:24.903272  554049 out.go:179] * Enabled addons: storage-provisioner, inspektor-gadget, nvidia-device-plugin, registry-creds, default-storageclass, amd-gpu-device-plugin, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1109 13:31:24.904711  554049 addons.go:515] duration metric: took 1m33.926303204s for enable addons: enabled=[storage-provisioner inspektor-gadget nvidia-device-plugin registry-creds default-storageclass amd-gpu-device-plugin cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
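	(For reference: the gcp-auth opt-out described above is applied as a pod label. A minimal manifest using it might look like the following sketch; only the label key `gcp-auth-skip-secret` comes from the message above, while the pod name, image, and the "true" value are illustrative assumptions.)
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds              # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"  # label key from the gcp-auth message; value assumed
	spec:
	  containers:
	  - name: app
	    image: docker.io/nginx:alpine # placeholder image
	(The --refresh remount mentioned above would presumably be rerun with the binary used elsewhere in this report, e.g. out/minikube-linux-amd64 -p addons-640912 addons enable gcp-auth --refresh.)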
	I1109 13:31:24.904783  554049 start.go:247] waiting for cluster config update ...
	I1109 13:31:24.904829  554049 start.go:256] writing updated cluster config ...
	I1109 13:31:24.905185  554049 ssh_runner.go:195] Run: rm -f paused
	I1109 13:31:24.913730  554049 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 13:31:24.921584  554049 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xtt8z" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:24.930823  554049 pod_ready.go:94] pod "coredns-66bc5c9577-xtt8z" is "Ready"
	I1109 13:31:24.930856  554049 pod_ready.go:86] duration metric: took 9.238515ms for pod "coredns-66bc5c9577-xtt8z" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:24.935635  554049 pod_ready.go:83] waiting for pod "etcd-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:24.945855  554049 pod_ready.go:94] pod "etcd-addons-640912" is "Ready"
	I1109 13:31:24.945886  554049 pod_ready.go:86] duration metric: took 10.21877ms for pod "etcd-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:24.949503  554049 pod_ready.go:83] waiting for pod "kube-apiserver-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:24.957508  554049 pod_ready.go:94] pod "kube-apiserver-addons-640912" is "Ready"
	I1109 13:31:24.957542  554049 pod_ready.go:86] duration metric: took 7.99802ms for pod "kube-apiserver-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:24.967022  554049 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:25.321696  554049 pod_ready.go:94] pod "kube-controller-manager-addons-640912" is "Ready"
	I1109 13:31:25.321729  554049 pod_ready.go:86] duration metric: took 354.672523ms for pod "kube-controller-manager-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:25.518911  554049 pod_ready.go:83] waiting for pod "kube-proxy-8hbf4" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:25.924834  554049 pod_ready.go:94] pod "kube-proxy-8hbf4" is "Ready"
	I1109 13:31:25.924867  554049 pod_ready.go:86] duration metric: took 405.924687ms for pod "kube-proxy-8hbf4" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:26.125658  554049 pod_ready.go:83] waiting for pod "kube-scheduler-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:26.520634  554049 pod_ready.go:94] pod "kube-scheduler-addons-640912" is "Ready"
	I1109 13:31:26.520674  554049 pod_ready.go:86] duration metric: took 394.982788ms for pod "kube-scheduler-addons-640912" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:31:26.520688  554049 pod_ready.go:40] duration metric: took 1.606902329s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 13:31:26.575762  554049 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1109 13:31:26.577333  554049 out.go:179] * Done! kubectl is now configured to use "addons-640912" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.507046626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762695293507008754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:510745,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75140782-cab3-44d6-90a5-5f236cce2fc0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.508374267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc590c7b-3ba4-4340-b77b-0feaa8c4d278 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.508963745Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc590c7b-3ba4-4340-b77b-0feaa8c4d278 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.509626497Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eacf871a61d3462bce510b8a7be4d66a24fdc0a53616b1f2526147ab839b5856,PodSandboxId:33d3042607b18515964bf61cd134e3070fa66eff00570ae2814c08265872d03b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1762695090213216833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 495cc12a-d51f-43be-a567-96a5b4fad03a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f006ca3c6df3652caf34bc932e6e33524e0f2033958bcc7ac29db42f159f478,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1762695080346362124,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7f113792ee1aeacca2bc95d2857de71fa6874b3ee951659efa28320a524532,PodSandboxId:1b24a0719053d6cafc5b1179a4660b7383b3ad3fee86821846956848a958c4f6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1762695078417836968,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-8j7xf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 94b07f23-7caa-4ac1-8abc-174660a2f7a4,},Annotations:map[string]string{io.kubernetes.container.hash
: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:bcb1af463fabe45efed49bfd6e02c5935505b1c97fef7131674ae2149a586b22,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6a
a2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1762695070011057395,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2db81d254d4480693b2357d5d2f213cb9d2804ed6a941205876e88393cd0017,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1762695068080434957,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb391b54abedc41ac9d3bcc30eee9c5cd3b25472ad3e6eff849b0e56456bcbc,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1762695066793520772,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88235c8c760041e04a7d8b1282c1c320e7605e8c3b3d893a904d39e3fa00cad1,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1762695064197231200,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08612da2da66df5a5111cdfafd6a0836d6f81c959759b15c0c9375639120d746,PodSandboxId:8fb95cb2623328f10eff3639cf25bd416d368d576613f64dd7bda936355116df,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1762695062447477858,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5fea9b-7c9b-4665-ac68-5e296d36729f,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dad22d6eb19cdb7a777287c38e623b79c1047aec888ff4247d50097f0dcbf9d3,PodSandboxId:98a0f80edfe46561b7f02fb9fa6e4bfcfc1f280de0c3016fa890830d94c013a4,Metadata:&ContainerMetadata{Name
:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1762695060433730190,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-pgl85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a227fb-a833-4bc3-928b-eacf5e94bd0f,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a5967370a8e6321b89d46ce0959546f53b9081e6af174d4ed2f04897d68e3e8,PodSandboxId:2dc442c25fff12adbc1b9b8
800fafd8bb42c57ba7ebc983153c384dd3e34bd78,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1762695060318700326,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d822fdee-fb25-4634-83b9-e9da33b6b333,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41eca9a1773d7ff95373be1bf07b6899eb28ba4b0529d7612588ab6ad1febc3,PodSandbox
Id:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1762695058637139160,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:5a862686cb4d2c0e54448fa8a8311708c3ae04778fab3490fa67801913b4a055,PodSandboxId:00a46d438634f1c852fdbf4f05d1491e38e2c51de289efbd8dd87e8acae44088,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1762695057053274602,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7kdd8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9535a584-09d0-470c-bdca-f8b70a29fe14,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e63cfc2b183fa3210c39dd451f348428dc1b2b33acc056d3b3d495017fc722,PodSandboxId:ee70cbc29ea09381c413501b957e8aa6802592bc217dfbc25edd106492661579,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1762695056922801001,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-qk9k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548acda2-9430-4b25-a3a8-09e0a17aa95f,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e9e9c4d34d47e7d5a67e92ca9d8eb33ab0fd0ca3ffc05476e8c6342d0a0e7e,PodSandboxId:a3f82e39fba77bdb98d98684962717a0ca32dec2da0c643b2022e7041b41dcd1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1762695054467740038,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kj7f9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a16c39e-0afd-467d-9a68-c565ad3f14d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739d0c9c47536b186d7447182ed8df1343ec1d122e8e76cad412862f797c12bf,PodSandboxId:fb5954c37633088b1f1ef24e5334f36facec56f88aa0ea9f1a348dc0e920f799,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1762695053004542480,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-s9qqg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 40fd720d-45ec-44da-81cf-484b9ed910af,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69fe297c1b50ac2bcfff0e0f2e8357c77f8446eda32f448474242c45e7beef16,PodSandboxId:57ab048400abbbc313169bbbd7ac0890f37bd48587fbee368b27c41aff728264,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1762695034556022840,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72b9e2-abd1-49dd-b3cb-155aafc6e442,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfab18621429e6c41f473ab17e6f86a03dc38ea44f3a31f18af4b9bbfc4c4874,PodSandboxId:4d885cc41b56cb4c4a8b9e39ac9027f00f4b68b9e2c742ea8006768f27f47e22,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1762695014619786271,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gp
u-device-plugin-2tv7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0019249b-f40e-4609-b592-f9fcc146c80a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb6f2c7163352aca0e271bcb0b20322b3ac5af9c07044f0d2523de5bd207ba1,PodSandboxId:346d7ee8b9728fdb09d0b5fb01ac8c855b8a4ae22b2702c9a4b14f42f894a970,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762695002979098849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-pr
ovisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59b766d9-adb7-4e0c-bc68-9d9cf8fbbdba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdc72298c5067f27022c29a3fba2313bef279aa1262921b54b2abfeb1bcfd05,PodSandboxId:f734f4ea6404b3851d49fa04e9e7d50d1b5a1dd1811142e147a667e58d9e5c61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762694992805161180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xtt8z,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 4c0e27e8-3047-4a17-9435-f9185e872696,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d0daf4cf92a3d52b6f1d92f9a1bfeeac5d7a7f31da83e4fb7df39268b3e4ea5,PodSandboxId:28544be4ccc8d274ee0f1ce693a4962e7eb2818d5bbf182e93a9e060de954441,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04
c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762694991896517056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97813667-ffbc-4b8a-a122-3fa531d57ee3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1939a4061bbfb63f842de0036168e14c4e2c4e170bc480e3de75bc49ea789293,PodSandboxId:8461abff35ed3e424ad88b8d770b8fd6f1d6ca8979d38ab40c0900b1f9daa2aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762694978374008451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f2f0cef7cfff2538acb5ffb3152000c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f516d00cd42560c02c8218477c1f740c22bdc3e04d80a67197f40634f93e478e,PodSandboxId:9d5b3d3ae012e6b1f760709c59c955b1738d0410b8a917c420fa67cd1ad2af07,Metadata:&ContainerMetadata{N
ame:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762694978316917120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50977dcfe4ea6e6a61a3e7cf80dace1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5312ba3c9de673540c31b087b548f07f9144cbe046b292ca5260fa3a2418c7,PodSandboxId:8cb548decbe810fcc34c
2b7b41b0518f383ba44d30236dad818bfd86a008c2a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762694978350105060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ac15728fbe3a146e056bd33fb08144,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:b5f31d63b316b6b4fceaf2cd37baa19b18c8dd0332189d49ddc503733e9f8479,PodSandboxId:82cda88284e7044d38c14febb7d1ee51977ecbd36b41815ac224232e7fb2f5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762694978329597383,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40db61fb06568701553ada1b7a8540a0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc590c7b-3ba4-4340-b77b-0feaa8c4d278 name=/runtime.v1.RuntimeService/ListContainers
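The three RPC pairs in this excerpt (Version, ImageFsInfo, ListContainers) are one kubelet polling cycle against CRI-O's gRPC endpoint; the identical ListContainers payloads that follow are simply successive cycles landing within the same second. For reference when triaging, the same calls can be issued by hand. This is a minimal sketch, assuming the standard k8s.io/cri-api Go bindings and CRI-O's default socket path on the node (both assumptions, not taken from this report):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Default CRI-O socket path (an assumption; the VM image may differ).
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()

        rt := runtimev1.NewRuntimeServiceClient(conn)
        img := runtimev1.NewImageServiceClient(conn)

        // /runtime.v1.RuntimeService/Version
        ver, err := rt.Version(ctx, &runtimev1.VersionRequest{})
        if err != nil {
            log.Fatalf("version: %v", err)
        }
        fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

        // /runtime.v1.ImageService/ImageFsInfo
        fs, err := img.ImageFsInfo(ctx, &runtimev1.ImageFsInfoRequest{})
        if err != nil {
            log.Fatalf("imagefsinfo: %v", err)
        }
        for _, f := range fs.ImageFilesystems {
            fmt.Printf("imagefs %s: %d bytes\n", f.FsId.Mountpoint, f.UsedBytes.Value)
        }

        // /runtime.v1.RuntimeService/ListContainers with an empty filter,
        // which is why crio logs "No filters were applied".
        lst, err := rt.ListContainers(ctx, &runtimev1.ListContainersRequest{})
        if err != nil {
            log.Fatalf("listcontainers: %v", err)
        }
        for _, c := range lst.Containers {
            fmt.Printf("%.12s %-30s %s\n", c.Id, c.Metadata.Name, c.State)
        }
    }

On the node itself, crictl (e.g. crictl ps, crictl imagefsinfo) wraps these same RPCs and is usually the quicker route.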
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.564269470Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40f09d0b-a29d-455d-9256-5e93e9b9ea2d name=/runtime.v1.RuntimeService/Version
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.564399625Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40f09d0b-a29d-455d-9256-5e93e9b9ea2d name=/runtime.v1.RuntimeService/Version
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.566449975Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15fa40c7-1d7b-4404-8c6a-3f42a64c0675 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.569331861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762695293569201365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:510745,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15fa40c7-1d7b-4404-8c6a-3f42a64c0675 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.570992882Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a946604-b4a2-4c9d-b332-9109c3e6326f name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.571121020Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a946604-b4a2-4c9d-b332-9109c3e6326f name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.571667750Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eacf871a61d3462bce510b8a7be4d66a24fdc0a53616b1f2526147ab839b5856,PodSandboxId:33d3042607b18515964bf61cd134e3070fa66eff00570ae2814c08265872d03b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1762695090213216833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 495cc12a-d51f-43be-a567-96a5b4fad03a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f006ca3c6df3652caf34bc932e6e33524e0f2033958bcc7ac29db42f159f478,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1762695080346362124,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7f113792ee1aeacca2bc95d2857de71fa6874b3ee951659efa28320a524532,PodSandboxId:1b24a0719053d6cafc5b1179a4660b7383b3ad3fee86821846956848a958c4f6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1762695078417836968,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-8j7xf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 94b07f23-7caa-4ac1-8abc-174660a2f7a4,},Annotations:map[string]string{io.kubernetes.container.hash
: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:bcb1af463fabe45efed49bfd6e02c5935505b1c97fef7131674ae2149a586b22,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6a
a2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1762695070011057395,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2db81d254d4480693b2357d5d2f213cb9d2804ed6a941205876e88393cd0017,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1762695068080434957,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb391b54abedc41ac9d3bcc30eee9c5cd3b25472ad3e6eff849b0e56456bcbc,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1762695066793520772,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88235c8c760041e04a7d8b1282c1c320e7605e8c3b3d893a904d39e3fa00cad1,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1762695064197231200,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08612da2da66df5a5111cdfafd6a0836d6f81c959759b15c0c9375639120d746,PodSandboxId:8fb95cb2623328f10eff3639cf25bd416d368d576613f64dd7bda936355116df,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1762695062447477858,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5fea9b-7c9b-4665-ac68-5e296d36729f,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dad22d6eb19cdb7a777287c38e623b79c1047aec888ff4247d50097f0dcbf9d3,PodSandboxId:98a0f80edfe46561b7f02fb9fa6e4bfcfc1f280de0c3016fa890830d94c013a4,Metadata:&ContainerMetadata{Name
:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1762695060433730190,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-pgl85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a227fb-a833-4bc3-928b-eacf5e94bd0f,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a5967370a8e6321b89d46ce0959546f53b9081e6af174d4ed2f04897d68e3e8,PodSandboxId:2dc442c25fff12adbc1b9b8
800fafd8bb42c57ba7ebc983153c384dd3e34bd78,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1762695060318700326,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d822fdee-fb25-4634-83b9-e9da33b6b333,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41eca9a1773d7ff95373be1bf07b6899eb28ba4b0529d7612588ab6ad1febc3,PodSandbox
Id:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1762695058637139160,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:5a862686cb4d2c0e54448fa8a8311708c3ae04778fab3490fa67801913b4a055,PodSandboxId:00a46d438634f1c852fdbf4f05d1491e38e2c51de289efbd8dd87e8acae44088,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1762695057053274602,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7kdd8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9535a584-09d0-470c-bdca-f8b70a29fe14,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e63cfc2b183fa3210c39dd451f348428dc1b2b33acc056d3b3d495017fc722,PodSandboxId:ee70cbc29ea09381c413501b957e8aa6802592bc217dfbc25edd106492661579,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1762695056922801001,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-qk9k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548acda2-9430-4b25-a3a8-09e0a17aa95f,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e9e9c4d34d47e7d5a67e92ca9d8eb33ab0fd0ca3ffc05476e8c6342d0a0e7e,PodSandboxId:a3f82e39fba77bdb98d98684962717a0ca32dec2da0c643b2022e7041b41dcd1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1762695054467740038,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kj7f9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a16c39e-0afd-467d-9a68-c565ad3f14d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739d0c9c47536b186d7447182ed8df1343ec1d122e8e76cad412862f797c12bf,PodSandboxId:fb5954c37633088b1f1ef24e5334f36facec56f88aa0ea9f1a348dc0e920f799,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1762695053004542480,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-s9qqg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 40fd720d-45ec-44da-81cf-484b9ed910af,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69fe297c1b50ac2bcfff0e0f2e8357c77f8446eda32f448474242c45e7beef16,PodSandboxId:57ab048400abbbc313169bbbd7ac0890f37bd48587fbee368b27c41aff728264,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1762695034556022840,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72b9e2-abd1-49dd-b3cb-155aafc6e442,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfab18621429e6c41f473ab17e6f86a03dc38ea44f3a31f18af4b9bbfc4c4874,PodSandboxId:4d885cc41b56cb4c4a8b9e39ac9027f00f4b68b9e2c742ea8006768f27f47e22,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1762695014619786271,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gp
u-device-plugin-2tv7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0019249b-f40e-4609-b592-f9fcc146c80a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb6f2c7163352aca0e271bcb0b20322b3ac5af9c07044f0d2523de5bd207ba1,PodSandboxId:346d7ee8b9728fdb09d0b5fb01ac8c855b8a4ae22b2702c9a4b14f42f894a970,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762695002979098849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-pr
ovisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59b766d9-adb7-4e0c-bc68-9d9cf8fbbdba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdc72298c5067f27022c29a3fba2313bef279aa1262921b54b2abfeb1bcfd05,PodSandboxId:f734f4ea6404b3851d49fa04e9e7d50d1b5a1dd1811142e147a667e58d9e5c61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762694992805161180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xtt8z,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 4c0e27e8-3047-4a17-9435-f9185e872696,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d0daf4cf92a3d52b6f1d92f9a1bfeeac5d7a7f31da83e4fb7df39268b3e4ea5,PodSandboxId:28544be4ccc8d274ee0f1ce693a4962e7eb2818d5bbf182e93a9e060de954441,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04
c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762694991896517056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97813667-ffbc-4b8a-a122-3fa531d57ee3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1939a4061bbfb63f842de0036168e14c4e2c4e170bc480e3de75bc49ea789293,PodSandboxId:8461abff35ed3e424ad88b8d770b8fd6f1d6ca8979d38ab40c0900b1f9daa2aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762694978374008451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f2f0cef7cfff2538acb5ffb3152000c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f516d00cd42560c02c8218477c1f740c22bdc3e04d80a67197f40634f93e478e,PodSandboxId:9d5b3d3ae012e6b1f760709c59c955b1738d0410b8a917c420fa67cd1ad2af07,Metadata:&ContainerMetadata{N
ame:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762694978316917120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50977dcfe4ea6e6a61a3e7cf80dace1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5312ba3c9de673540c31b087b548f07f9144cbe046b292ca5260fa3a2418c7,PodSandboxId:8cb548decbe810fcc34c
2b7b41b0518f383ba44d30236dad818bfd86a008c2a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762694978350105060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ac15728fbe3a146e056bd33fb08144,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:b5f31d63b316b6b4fceaf2cd37baa19b18c8dd0332189d49ddc503733e9f8479,PodSandboxId:82cda88284e7044d38c14febb7d1ee51977ecbd36b41815ac224232e7fb2f5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762694978329597383,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40db61fb06568701553ada1b7a8540a0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a946604-b4a2-4c9d-b332-9109c3e6326f name=/runtime.v1.RuntimeService/ListContainers
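Note that the ListContainers payloads at 13:34:53 (ids cc590c7b, 3a946604, and e02c791d below) are byte-identical; only the request ids and microsecond timestamps differ. When scanning dumps like these for a state change (for example, the nginx pod stuck in ImagePullBackOff above), a throwaway filter that reduces each dump to name/state pairs is handy. A sketch, assuming the journal emits each debug record as a single physical line (journalctl -u crio | grep ListContainersResponse | thisfilter):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // One match per container entry: the Name in ContainerMetadata,
        // then the nearest following State field (non-greedy).
        re := regexp.MustCompile(`ContainerMetadata\{Name:([\w-]+),Attempt:\d+.*?State:(CONTAINER_\w+)`)
        sc := bufio.NewScanner(os.Stdin)
        // The dumps are far larger than the default 64 KiB token limit.
        sc.Buffer(make([]byte, 0, 1024*1024), 16*1024*1024)
        for sc.Scan() {
            for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
                fmt.Printf("%-40s %s\n", m[1], m[2])
            }
        }
    }

Against the dumps in this report it would print one line per container (controller CONTAINER_RUNNING, patch CONTAINER_EXITED, and so on), making the repeated polling cycles easy to diff.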
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.626204147Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd456576-fd11-4b18-9faf-7ebf6df071ca name=/runtime.v1.RuntimeService/Version
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.626377681Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd456576-fd11-4b18-9faf-7ebf6df071ca name=/runtime.v1.RuntimeService/Version
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.630472364Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91ad9ce3-de20-4f68-92fd-33d23ed82267 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.631819983Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762695293631786608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:510745,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91ad9ce3-de20-4f68-92fd-33d23ed82267 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.633240401Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e02c791d-a5f5-4b41-9c88-3d546456b350 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.633446185Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e02c791d-a5f5-4b41-9c88-3d546456b350 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.634716147Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eacf871a61d3462bce510b8a7be4d66a24fdc0a53616b1f2526147ab839b5856,PodSandboxId:33d3042607b18515964bf61cd134e3070fa66eff00570ae2814c08265872d03b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1762695090213216833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 495cc12a-d51f-43be-a567-96a5b4fad03a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f006ca3c6df3652caf34bc932e6e33524e0f2033958bcc7ac29db42f159f478,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1762695080346362124,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7f113792ee1aeacca2bc95d2857de71fa6874b3ee951659efa28320a524532,PodSandboxId:1b24a0719053d6cafc5b1179a4660b7383b3ad3fee86821846956848a958c4f6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1762695078417836968,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-8j7xf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 94b07f23-7caa-4ac1-8abc-174660a2f7a4,},Annotations:map[string]string{io.kubernetes.container.hash
: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:bcb1af463fabe45efed49bfd6e02c5935505b1c97fef7131674ae2149a586b22,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6a
a2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1762695070011057395,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2db81d254d4480693b2357d5d2f213cb9d2804ed6a941205876e88393cd0017,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1762695068080434957,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb391b54abedc41ac9d3bcc30eee9c5cd3b25472ad3e6eff849b0e56456bcbc,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1762695066793520772,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88235c8c760041e04a7d8b1282c1c320e7605e8c3b3d893a904d39e3fa00cad1,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1762695064197231200,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08612da2da66df5a5111cdfafd6a0836d6f81c959759b15c0c9375639120d746,PodSandboxId:8fb95cb2623328f10eff3639cf25bd416d368d576613f64dd7bda936355116df,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1762695062447477858,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5fea9b-7c9b-4665-ac68-5e296d36729f,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dad22d6eb19cdb7a777287c38e623b79c1047aec888ff4247d50097f0dcbf9d3,PodSandboxId:98a0f80edfe46561b7f02fb9fa6e4bfcfc1f280de0c3016fa890830d94c013a4,Metadata:&ContainerMetadata{Name
:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1762695060433730190,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-pgl85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a227fb-a833-4bc3-928b-eacf5e94bd0f,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a5967370a8e6321b89d46ce0959546f53b9081e6af174d4ed2f04897d68e3e8,PodSandboxId:2dc442c25fff12adbc1b9b8
800fafd8bb42c57ba7ebc983153c384dd3e34bd78,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1762695060318700326,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d822fdee-fb25-4634-83b9-e9da33b6b333,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41eca9a1773d7ff95373be1bf07b6899eb28ba4b0529d7612588ab6ad1febc3,PodSandbox
Id:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1762695058637139160,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:5a862686cb4d2c0e54448fa8a8311708c3ae04778fab3490fa67801913b4a055,PodSandboxId:00a46d438634f1c852fdbf4f05d1491e38e2c51de289efbd8dd87e8acae44088,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1762695057053274602,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7kdd8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9535a584-09d0-470c-bdca-f8b70a29fe14,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e63cfc2b183fa3210c39dd451f348428dc1b2b33acc056d3b3d495017fc722,PodSandboxId:ee70cbc29ea09381c413501b957e8aa6802592bc217dfbc25edd106492661579,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1762695056922801001,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-qk9k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548acda2-9430-4b25-a3a8-09e0a17aa95f,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e9e9c4d34d47e7d5a67e92ca9d8eb33ab0fd0ca3ffc05476e8c6342d0a0e7e,PodSandboxId:a3f82e39fba77bdb98d98684962717a0ca32dec2da0c643b2022e7041b41dcd1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1762695054467740038,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kj7f9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a16c39e-0afd-467d-9a68-c565ad3f14d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739d0c9c47536b186d7447182ed8df1343ec1d122e8e76cad412862f797c12bf,PodSandboxId:fb5954c37633088b1f1ef24e5334f36facec56f88aa0ea9f1a348dc0e920f799,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1762695053004542480,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-s9qqg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 40fd720d-45ec-44da-81cf-484b9ed910af,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69fe297c1b50ac2bcfff0e0f2e8357c77f8446eda32f448474242c45e7beef16,PodSandboxId:57ab048400abbbc313169bbbd7ac0890f37bd48587fbee368b27c41aff728264,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1762695034556022840,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72b9e2-abd1-49dd-b3cb-155aafc6e442,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfab18621429e6c41f473ab17e6f86a03dc38ea44f3a31f18af4b9bbfc4c4874,PodSandboxId:4d885cc41b56cb4c4a8b9e39ac9027f00f4b68b9e2c742ea8006768f27f47e22,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1762695014619786271,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gp
u-device-plugin-2tv7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0019249b-f40e-4609-b592-f9fcc146c80a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb6f2c7163352aca0e271bcb0b20322b3ac5af9c07044f0d2523de5bd207ba1,PodSandboxId:346d7ee8b9728fdb09d0b5fb01ac8c855b8a4ae22b2702c9a4b14f42f894a970,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762695002979098849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-pr
ovisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59b766d9-adb7-4e0c-bc68-9d9cf8fbbdba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdc72298c5067f27022c29a3fba2313bef279aa1262921b54b2abfeb1bcfd05,PodSandboxId:f734f4ea6404b3851d49fa04e9e7d50d1b5a1dd1811142e147a667e58d9e5c61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762694992805161180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xtt8z,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 4c0e27e8-3047-4a17-9435-f9185e872696,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d0daf4cf92a3d52b6f1d92f9a1bfeeac5d7a7f31da83e4fb7df39268b3e4ea5,PodSandboxId:28544be4ccc8d274ee0f1ce693a4962e7eb2818d5bbf182e93a9e060de954441,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04
c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762694991896517056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97813667-ffbc-4b8a-a122-3fa531d57ee3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1939a4061bbfb63f842de0036168e14c4e2c4e170bc480e3de75bc49ea789293,PodSandboxId:8461abff35ed3e424ad88b8d770b8fd6f1d6ca8979d38ab40c0900b1f9daa2aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762694978374008451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f2f0cef7cfff2538acb5ffb3152000c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f516d00cd42560c02c8218477c1f740c22bdc3e04d80a67197f40634f93e478e,PodSandboxId:9d5b3d3ae012e6b1f760709c59c955b1738d0410b8a917c420fa67cd1ad2af07,Metadata:&ContainerMetadata{N
ame:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762694978316917120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50977dcfe4ea6e6a61a3e7cf80dace1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5312ba3c9de673540c31b087b548f07f9144cbe046b292ca5260fa3a2418c7,PodSandboxId:8cb548decbe810fcc34c
2b7b41b0518f383ba44d30236dad818bfd86a008c2a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762694978350105060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ac15728fbe3a146e056bd33fb08144,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:b5f31d63b316b6b4fceaf2cd37baa19b18c8dd0332189d49ddc503733e9f8479,PodSandboxId:82cda88284e7044d38c14febb7d1ee51977ecbd36b41815ac224232e7fb2f5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762694978329597383,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40db61fb06568701553ada1b7a8540a0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e02c791d-a5f5-4b41-9c88-3d546456b350 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.692095580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=30068e37-dbd1-4d2e-9824-42a03d09633b name=/runtime.v1.RuntimeService/Version
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.692210967Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30068e37-dbd1-4d2e-9824-42a03d09633b name=/runtime.v1.RuntimeService/Version
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.694550169Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e2eaf831-3e7b-4581-9f0f-4ddc64c5b9df name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.696663697Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762695293696625278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:510745,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e2eaf831-3e7b-4581-9f0f-4ddc64c5b9df name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.698319667Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8126a2b0-0815-4972-ae44-c3e9d2af09ae name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.698428553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8126a2b0-0815-4972-ae44-c3e9d2af09ae name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:34:53 addons-640912 crio[808]: time="2025-11-09 13:34:53.699023156Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eacf871a61d3462bce510b8a7be4d66a24fdc0a53616b1f2526147ab839b5856,PodSandboxId:33d3042607b18515964bf61cd134e3070fa66eff00570ae2814c08265872d03b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1762695090213216833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 495cc12a-d51f-43be-a567-96a5b4fad03a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f006ca3c6df3652caf34bc932e6e33524e0f2033958bcc7ac29db42f159f478,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1762695080346362124,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c7f113792ee1aeacca2bc95d2857de71fa6874b3ee951659efa28320a524532,PodSandboxId:1b24a0719053d6cafc5b1179a4660b7383b3ad3fee86821846956848a958c4f6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1762695078417836968,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-8j7xf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 94b07f23-7caa-4ac1-8abc-174660a2f7a4,},Annotations:map[string]string{io.kubernetes.container.hash
: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:bcb1af463fabe45efed49bfd6e02c5935505b1c97fef7131674ae2149a586b22,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6a
a2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1762695070011057395,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2db81d254d4480693b2357d5d2f213cb9d2804ed6a941205876e88393cd0017,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1762695068080434957,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb391b54abedc41ac9d3bcc30eee9c5cd3b25472ad3e6eff849b0e56456bcbc,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1762695066793520772,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88235c8c760041e04a7d8b1282c1c320e7605e8c3b3d893a904d39e3fa00cad1,PodSandboxId:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1762695064197231200,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08612da2da66df5a5111cdfafd6a0836d6f81c959759b15c0c9375639120d746,PodSandboxId:8fb95cb2623328f10eff3639cf25bd416d368d576613f64dd7bda936355116df,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1762695062447477858,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5fea9b-7c9b-4665-ac68-5e296d36729f,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dad22d6eb19cdb7a777287c38e623b79c1047aec888ff4247d50097f0dcbf9d3,PodSandboxId:98a0f80edfe46561b7f02fb9fa6e4bfcfc1f280de0c3016fa890830d94c013a4,Metadata:&ContainerMetadata{Name
:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1762695060433730190,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-pgl85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9a227fb-a833-4bc3-928b-eacf5e94bd0f,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a5967370a8e6321b89d46ce0959546f53b9081e6af174d4ed2f04897d68e3e8,PodSandboxId:2dc442c25fff12adbc1b9b8
800fafd8bb42c57ba7ebc983153c384dd3e34bd78,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1762695060318700326,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d822fdee-fb25-4634-83b9-e9da33b6b333,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41eca9a1773d7ff95373be1bf07b6899eb28ba4b0529d7612588ab6ad1febc3,PodSandbox
Id:464368ae5553361a91b1a1d64c338f9c31a390723baf9aa1e3529aea5aeb13ef,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1762695058637139160,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-9dzzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab8236e3-2fb1-49f8-8fee-3f16fc4b3ca8,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:5a862686cb4d2c0e54448fa8a8311708c3ae04778fab3490fa67801913b4a055,PodSandboxId:00a46d438634f1c852fdbf4f05d1491e38e2c51de289efbd8dd87e8acae44088,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1762695057053274602,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7kdd8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9535a584-09d0-470c-bdca-f8b70a29fe14,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e63cfc2b183fa3210c39dd451f348428dc1b2b33acc056d3b3d495017fc722,PodSandboxId:ee70cbc29ea09381c413501b957e8aa6802592bc217dfbc25edd106492661579,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1762695056922801001,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-qk9k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 548acda2-9430-4b25-a3a8-09e0a17aa95f,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e9e9c4d34d47e7d5a67e92ca9d8eb33ab0fd0ca3ffc05476e8c6342d0a0e7e,PodSandboxId:a3f82e39fba77bdb98d98684962717a0ca32dec2da0c643b2022e7041b41dcd1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1762695054467740038,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kj7f9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a16c39e-0afd-467d-9a68-c565ad3f14d1,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739d0c9c47536b186d7447182ed8df1343ec1d122e8e76cad412862f797c12bf,PodSandboxId:fb5954c37633088b1f1ef24e5334f36facec56f88aa0ea9f1a348dc0e920f799,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1762695053004542480,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-s9qqg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 40fd720d-45ec-44da-81cf-484b9ed910af,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69fe297c1b50ac2bcfff0e0f2e8357c77f8446eda32f448474242c45e7beef16,PodSandboxId:57ab048400abbbc313169bbbd7ac0890f37bd48587fbee368b27c41aff728264,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1762695034556022840,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa72b9e2-abd1-49dd-b3cb-155aafc6e442,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfab18621429e6c41f473ab17e6f86a03dc38ea44f3a31f18af4b9bbfc4c4874,PodSandboxId:4d885cc41b56cb4c4a8b9e39ac9027f00f4b68b9e2c742ea8006768f27f47e22,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1762695014619786271,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gp
u-device-plugin-2tv7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0019249b-f40e-4609-b592-f9fcc146c80a,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb6f2c7163352aca0e271bcb0b20322b3ac5af9c07044f0d2523de5bd207ba1,PodSandboxId:346d7ee8b9728fdb09d0b5fb01ac8c855b8a4ae22b2702c9a4b14f42f894a970,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762695002979098849,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-pr
ovisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59b766d9-adb7-4e0c-bc68-9d9cf8fbbdba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecdc72298c5067f27022c29a3fba2313bef279aa1262921b54b2abfeb1bcfd05,PodSandboxId:f734f4ea6404b3851d49fa04e9e7d50d1b5a1dd1811142e147a667e58d9e5c61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762694992805161180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xtt8z,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 4c0e27e8-3047-4a17-9435-f9185e872696,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d0daf4cf92a3d52b6f1d92f9a1bfeeac5d7a7f31da83e4fb7df39268b3e4ea5,PodSandboxId:28544be4ccc8d274ee0f1ce693a4962e7eb2818d5bbf182e93a9e060de954441,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04
c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762694991896517056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97813667-ffbc-4b8a-a122-3fa531d57ee3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1939a4061bbfb63f842de0036168e14c4e2c4e170bc480e3de75bc49ea789293,PodSandboxId:8461abff35ed3e424ad88b8d770b8fd6f1d6ca8979d38ab40c0900b1f9daa2aa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762694978374008451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f2f0cef7cfff2538acb5ffb3152000c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f516d00cd42560c02c8218477c1f740c22bdc3e04d80a67197f40634f93e478e,PodSandboxId:9d5b3d3ae012e6b1f760709c59c955b1738d0410b8a917c420fa67cd1ad2af07,Metadata:&ContainerMetadata{N
ame:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762694978316917120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50977dcfe4ea6e6a61a3e7cf80dace1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5312ba3c9de673540c31b087b548f07f9144cbe046b292ca5260fa3a2418c7,PodSandboxId:8cb548decbe810fcc34c
2b7b41b0518f383ba44d30236dad818bfd86a008c2a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762694978350105060,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ac15728fbe3a146e056bd33fb08144,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:b5f31d63b316b6b4fceaf2cd37baa19b18c8dd0332189d49ddc503733e9f8479,PodSandboxId:82cda88284e7044d38c14febb7d1ee51977ecbd36b41815ac224232e7fb2f5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762694978329597383,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-640912,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40db61fb06568701553ada1b7a8540a0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8126a2b0-0815-4972-ae44-c3e9d2af09ae name=/runtime.v1.RuntimeService/ListContainers
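The Version/ImageFsInfo/ListContainers traffic above is ordinary CRI gRPC polling against CRI-O's local socket. A minimal sketch of issuing the same three calls yourself, assuming the standard k8s.io/cri-api v1 client, google.golang.org/grpc, and CRI-O's default socket path /var/run/crio/crio.sock (none of which are shown in this report):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O serves the CRI on a local unix socket, so plaintext credentials are fine.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// /runtime.v1.RuntimeService/Version -- answered above with cri-o 1.29.1.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(ver.RuntimeName, ver.RuntimeVersion)

	// /runtime.v1.ImageService/ImageFsInfo -- the overlay-images usage figures above.
	img := runtimeapi.NewImageServiceClient(conn)
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, f := range fs.ImageFilesystems {
		fmt.Println(f.FsId.Mountpoint, f.UsedBytes.Value)
	}

	// /runtime.v1.RuntimeService/ListContainers with an empty filter, which is
	// exactly the "No filters were applied" case crio logs before the big response.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}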
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	eacf871a61d34       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          3 minutes ago       Running             busybox                                  0                   33d3042607b18       busybox
	3f006ca3c6df3       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          3 minutes ago       Running             csi-snapshotter                          0                   464368ae55533       csi-hostpathplugin-9dzzw
	6c7f113792ee1       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd                             3 minutes ago       Running             controller                               0                   1b24a0719053d       ingress-nginx-controller-675c5ddd98-8j7xf
	bcb1af463fabe       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago       Running             csi-provisioner                          0                   464368ae55533       csi-hostpathplugin-9dzzw
	c2db81d254d44       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago       Running             liveness-probe                           0                   464368ae55533       csi-hostpathplugin-9dzzw
	2bb391b54abed       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago       Running             hostpath                                 0                   464368ae55533       csi-hostpathplugin-9dzzw
	88235c8c76004       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago       Running             node-driver-registrar                    0                   464368ae55533       csi-hostpathplugin-9dzzw
	08612da2da66d       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago       Running             csi-resizer                              0                   8fb95cb262332       csi-hostpath-resizer-0
	dad22d6eb19cd       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   98a0f80edfe46       snapshot-controller-7d9fbc56b8-pgl85
	5a5967370a8e6       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago       Running             csi-attacher                             0                   2dc442c25fff1       csi-hostpath-attacher-0
	b41eca9a1773d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago       Running             csi-external-health-monitor-controller   0                   464368ae55533       csi-hostpathplugin-9dzzw
	5a862686cb4d2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   3 minutes ago       Exited              patch                                    0                   00a46d438634f       ingress-nginx-admission-patch-7kdd8
	34e63cfc2b183       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   ee70cbc29ea09       snapshot-controller-7d9fbc56b8-qk9k2
	52e9e9c4d34d4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   3 minutes ago       Exited              create                                   0                   a3f82e39fba77       ingress-nginx-admission-create-kj7f9
	739d0c9c47536       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             4 minutes ago       Running             local-path-provisioner                   0                   fb5954c376330       local-path-provisioner-648f6765c9-s9qqg
	69fe297c1b50a       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               4 minutes ago       Running             minikube-ingress-dns                     0                   57ab048400abb       kube-ingress-dns-minikube
	cfab18621429e       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     4 minutes ago       Running             amd-gpu-device-plugin                    0                   4d885cc41b56c       amd-gpu-device-plugin-2tv7p
	1bb6f2c716335       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             4 minutes ago       Running             storage-provisioner                      0                   346d7ee8b9728       storage-provisioner
	ecdc72298c506       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             5 minutes ago       Running             coredns                                  0                   f734f4ea6404b       coredns-66bc5c9577-xtt8z
	4d0daf4cf92a3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             5 minutes ago       Running             kube-proxy                               0                   28544be4ccc8d       kube-proxy-8hbf4
	1939a4061bbfb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             5 minutes ago       Running             kube-controller-manager                  0                   8461abff35ed3       kube-controller-manager-addons-640912
	7a5312ba3c9de       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             5 minutes ago       Running             kube-scheduler                           0                   8cb548decbe81       kube-scheduler-addons-640912
	b5f31d63b316b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             5 minutes ago       Running             kube-apiserver                           0                   82cda88284e70       kube-apiserver-addons-640912
	f516d00cd4256       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             5 minutes ago       Running             etcd                                     0                   9d5b3d3ae012e       etcd-addons-640912
	
	
	==> coredns [ecdc72298c5067f27022c29a3fba2313bef279aa1262921b54b2abfeb1bcfd05] <==
	[INFO] 10.244.0.8:56749 - 57800 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000167846s
	[INFO] 10.244.0.8:56749 - 7634 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000234715s
	[INFO] 10.244.0.8:56749 - 64775 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000214438s
	[INFO] 10.244.0.8:56749 - 27735 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000222714s
	[INFO] 10.244.0.8:56749 - 4667 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000235027s
	[INFO] 10.244.0.8:56749 - 32956 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000240757s
	[INFO] 10.244.0.8:56749 - 59149 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.001351156s
	[INFO] 10.244.0.8:47223 - 42964 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000217411s
	[INFO] 10.244.0.8:47223 - 43270 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001085953s
	[INFO] 10.244.0.8:60054 - 39280 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000101116s
	[INFO] 10.244.0.8:60054 - 39607 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000284859s
	[INFO] 10.244.0.8:45885 - 39288 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001299s
	[INFO] 10.244.0.8:45885 - 39507 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000087106s
	[INFO] 10.244.0.8:33022 - 41004 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000143608s
	[INFO] 10.244.0.8:33022 - 41467 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000090986s
	[INFO] 10.244.0.23:41923 - 2129 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000901948s
	[INFO] 10.244.0.23:37925 - 19699 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000411247s
	[INFO] 10.244.0.23:56154 - 55757 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000225694s
	[INFO] 10.244.0.23:55144 - 14584 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000195303s
	[INFO] 10.244.0.23:43131 - 45070 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000319047s
	[INFO] 10.244.0.23:59696 - 23369 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.002225751s
	[INFO] 10.244.0.23:45065 - 55293 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001506203s
	[INFO] 10.244.0.23:47314 - 7537 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.005372558s
	[INFO] 10.244.0.28:41385 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.003092641s
	[INFO] 10.244.0.28:50820 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.001765974s
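The NXDOMAIN-then-NOERROR ladders above are the standard resolv.conf search-list expansion, not lookup failures: with the usual in-cluster options of ndots:5 plus the namespace/svc/cluster search suffixes (assumed here; the pod's resolv.conf is not shown in this report), a name with fewer than five dots is tried against each suffix before being tried verbatim. A self-contained sketch that reproduces the exact query sequence logged for registry.kube-system.svc.cluster.local:

package main

import (
	"fmt"
	"strings"
)

// expand mimics the resolver's search-list behaviour: a name with fewer dots
// than ndots is tried with every search suffix before being tried verbatim.
func expand(name string, search []string, ndots int) []string {
	var tries []string
	if strings.Count(name, ".") < ndots {
		for _, s := range search {
			tries = append(tries, name+"."+s)
		}
	}
	return append(tries, name)
}

func main() {
	// Assumed search path for a pod in the kube-system namespace.
	search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	for _, q := range expand("registry.kube-system.svc.cluster.local", search, 5) {
		fmt.Println(q) // the first three answer NXDOMAIN above, the last NOERROR
	}
}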
	
	
	==> describe nodes <==
	Name:               addons-640912
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-640912
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=addons-640912
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T13_29_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-640912
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-640912"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:29:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-640912
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 13:34:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 13:32:50 +0000   Sun, 09 Nov 2025 13:29:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 13:32:50 +0000   Sun, 09 Nov 2025 13:29:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 13:32:50 +0000   Sun, 09 Nov 2025 13:29:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 13:32:50 +0000   Sun, 09 Nov 2025 13:29:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    addons-640912
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 50c2653cdfbf41f9bef0624b1a679070
	  System UUID:                50c2653c-dfbf-41f9-bef0-624b1a679070
	  Boot ID:                    92fab23c-5b35-498d-b1ae-dc16572c1ced
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-8j7xf    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m51s
	  kube-system                 amd-gpu-device-plugin-2tv7p                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 coredns-66bc5c9577-xtt8z                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m4s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 csi-hostpathplugin-9dzzw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 etcd-addons-640912                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m9s
	  kube-system                 kube-apiserver-addons-640912                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-controller-manager-addons-640912        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-proxy-8hbf4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-scheduler-addons-640912                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 snapshot-controller-7d9fbc56b8-pgl85         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 snapshot-controller-7d9fbc56b8-qk9k2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  local-path-storage          local-path-provisioner-648f6765c9-s9qqg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m     kube-proxy       
	  Normal  Starting                 5m10s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m9s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m9s   kubelet          Node addons-640912 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m9s   kubelet          Node addons-640912 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m9s   kubelet          Node addons-640912 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m8s   kubelet          Node addons-640912 status is now: NodeReady
	  Normal  RegisteredNode           5m5s   node-controller  Node addons-640912 event: Registered Node addons-640912 in Controller
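As a quick sanity check, the request/limit percentages in the Allocated resources table above follow directly from the node's Allocatable figures (2 CPUs = 2000m, 4001788Ki memory), with kubectl truncating toward zero:

    cpu:    850m / 2000m                        = 42.5%  -> shown as 42%
    memory: 260Mi = 266240Ki; 266240 / 4001788  ≈ 6.7%   -> shown as 6%
            170Mi = 174080Ki; 174080 / 4001788  ≈ 4.3%   -> shown as 4%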
	
	
	==> dmesg <==
	[  +0.000084] kauditd_printk_skb: 207 callbacks suppressed
	[Nov 9 13:30] kauditd_printk_skb: 123 callbacks suppressed
	[  +2.597925] kauditd_printk_skb: 235 callbacks suppressed
	[  +0.573763] kauditd_printk_skb: 410 callbacks suppressed
	[  +9.105607] kauditd_printk_skb: 35 callbacks suppressed
	[  +9.999909] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.891357] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.415789] kauditd_printk_skb: 122 callbacks suppressed
	[  +4.010962] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.104992] kauditd_printk_skb: 59 callbacks suppressed
	[Nov 9 13:31] kauditd_printk_skb: 87 callbacks suppressed
	[  +2.729784] kauditd_printk_skb: 111 callbacks suppressed
	[  +0.054539] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.206294] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.614998] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.051708] kauditd_printk_skb: 47 callbacks suppressed
	[  +9.781817] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.000042] kauditd_printk_skb: 22 callbacks suppressed
	[  +3.743671] kauditd_printk_skb: 109 callbacks suppressed
	[  +3.183523] kauditd_printk_skb: 109 callbacks suppressed
	[Nov 9 13:32] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.000937] kauditd_printk_skb: 30 callbacks suppressed
	[ +12.098567] kauditd_printk_skb: 94 callbacks suppressed
	[  +6.595281] kauditd_printk_skb: 80 callbacks suppressed
	[Nov 9 13:33] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [f516d00cd42560c02c8218477c1f740c22bdc3e04d80a67197f40634f93e478e] <==
	{"level":"warn","ts":"2025-11-09T13:31:15.197375Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.099856ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:31:15.197462Z","caller":"traceutil/trace.go:172","msg":"trace[356284258] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1132; }","duration":"132.198445ms","start":"2025-11-09T13:31:15.065252Z","end":"2025-11-09T13:31:15.197451Z","steps":["trace[356284258] 'agreement among raft nodes before linearized reading'  (duration: 128.166813ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:31:15.198082Z","caller":"traceutil/trace.go:172","msg":"trace[1784177865] transaction","detail":"{read_only:false; response_revision:1133; number_of_response:1; }","duration":"153.692795ms","start":"2025-11-09T13:31:15.044376Z","end":"2025-11-09T13:31:15.198069Z","steps":["trace[1784177865] 'process raft request'  (duration: 148.980587ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:31:22.996146Z","caller":"traceutil/trace.go:172","msg":"trace[302697611] linearizableReadLoop","detail":"{readStateIndex:1194; appliedIndex:1194; }","duration":"165.040081ms","start":"2025-11-09T13:31:22.831088Z","end":"2025-11-09T13:31:22.996128Z","steps":["trace[302697611] 'read index received'  (duration: 165.034114ms)","trace[302697611] 'applied index is now lower than readState.Index'  (duration: 5.157µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T13:31:22.996293Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.199351ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:31:22.996314Z","caller":"traceutil/trace.go:172","msg":"trace[1678018842] range","detail":"{range_begin:/registry/poddisruptionbudgets; range_end:; response_count:0; response_revision:1160; }","duration":"165.253621ms","start":"2025-11-09T13:31:22.831055Z","end":"2025-11-09T13:31:22.996309Z","steps":["trace[1678018842] 'agreement among raft nodes before linearized reading'  (duration: 165.171034ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T13:31:22.997662Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.922771ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:31:22.998744Z","caller":"traceutil/trace.go:172","msg":"trace[1858999846] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1161; }","duration":"103.012616ms","start":"2025-11-09T13:31:22.895717Z","end":"2025-11-09T13:31:22.998730Z","steps":["trace[1858999846] 'agreement among raft nodes before linearized reading'  (duration: 101.899265ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:31:22.999100Z","caller":"traceutil/trace.go:172","msg":"trace[1451434482] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"249.478691ms","start":"2025-11-09T13:31:22.749609Z","end":"2025-11-09T13:31:22.999088Z","steps":["trace[1451434482] 'process raft request'  (duration: 247.857862ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:31:52.754938Z","caller":"traceutil/trace.go:172","msg":"trace[6026568] linearizableReadLoop","detail":"{readStateIndex:1397; appliedIndex:1397; }","duration":"236.117273ms","start":"2025-11-09T13:31:52.518730Z","end":"2025-11-09T13:31:52.754847Z","steps":["trace[6026568] 'read index received'  (duration: 236.112503ms)","trace[6026568] 'applied index is now lower than readState.Index'  (duration: 4.061µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T13:31:52.755188Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"236.415585ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2025-11-09T13:31:52.755257Z","caller":"traceutil/trace.go:172","msg":"trace[6914757] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1354; }","duration":"236.519277ms","start":"2025-11-09T13:31:52.518725Z","end":"2025-11-09T13:31:52.755244Z","steps":["trace[6914757] 'agreement among raft nodes before linearized reading'  (duration: 236.32921ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T13:31:52.755661Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.569325ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" limit:1 ","response":"range_response_count:1 size:3395"}
	{"level":"info","ts":"2025-11-09T13:31:52.755687Z","caller":"traceutil/trace.go:172","msg":"trace[1620442481] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:1; response_revision:1355; }","duration":"185.600716ms","start":"2025-11-09T13:31:52.570080Z","end":"2025-11-09T13:31:52.755681Z","steps":["trace[1620442481] 'agreement among raft nodes before linearized reading'  (duration: 185.518604ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:31:52.755923Z","caller":"traceutil/trace.go:172","msg":"trace[1200344183] transaction","detail":"{read_only:false; response_revision:1355; number_of_response:1; }","duration":"304.583393ms","start":"2025-11-09T13:31:52.451331Z","end":"2025-11-09T13:31:52.755915Z","steps":["trace[1200344183] 'process raft request'  (duration: 304.178309ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T13:31:52.756031Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-09T13:31:52.451310Z","time spent":"304.631939ms","remote":"127.0.0.1:58684","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1343 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-11-09T13:31:55.033981Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.520258ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:31:55.034081Z","caller":"traceutil/trace.go:172","msg":"trace[553597333] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1365; }","duration":"136.623206ms","start":"2025-11-09T13:31:54.897438Z","end":"2025-11-09T13:31:55.034062Z","steps":["trace[553597333] 'range keys from in-memory index tree'  (duration: 136.438838ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:32:01.051010Z","caller":"traceutil/trace.go:172","msg":"trace[427081115] linearizableReadLoop","detail":"{readStateIndex:1451; appliedIndex:1451; }","duration":"321.984641ms","start":"2025-11-09T13:32:00.728995Z","end":"2025-11-09T13:32:01.050980Z","steps":["trace[427081115] 'read index received'  (duration: 321.978499ms)","trace[427081115] 'applied index is now lower than readState.Index'  (duration: 5.245µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T13:32:01.051205Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"322.326861ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-09T13:32:01.051230Z","caller":"traceutil/trace.go:172","msg":"trace[33595075] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1404; }","duration":"322.375104ms","start":"2025-11-09T13:32:00.728848Z","end":"2025-11-09T13:32:01.051224Z","steps":["trace[33595075] 'agreement among raft nodes before linearized reading'  (duration: 322.303091ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T13:32:01.052190Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.405283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-11-09T13:32:01.052402Z","caller":"traceutil/trace.go:172","msg":"trace[1969419880] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1405; }","duration":"217.720748ms","start":"2025-11-09T13:32:00.834666Z","end":"2025-11-09T13:32:01.052387Z","steps":["trace[1969419880] 'agreement among raft nodes before linearized reading'  (duration: 217.090716ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T13:32:01.052589Z","caller":"traceutil/trace.go:172","msg":"trace[1054216716] transaction","detail":"{read_only:false; response_revision:1405; number_of_response:1; }","duration":"365.515044ms","start":"2025-11-09T13:32:00.687065Z","end":"2025-11-09T13:32:01.052580Z","steps":["trace[1054216716] 'process raft request'  (duration: 364.182623ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T13:32:01.052693Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-09T13:32:00.687045Z","time spent":"365.59912ms","remote":"127.0.0.1:58726","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3708,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/cloud-spanner-emulator-6f9fcf858b-v5gk4\" mod_revision:1404 > success:<request_put:<key:\"/registry/pods/default/cloud-spanner-emulator-6f9fcf858b-v5gk4\" value_size:3638 >> failure:<request_range:<key:\"/registry/pods/default/cloud-spanner-emulator-6f9fcf858b-v5gk4\" > >"}
	
	
	==> kernel <==
	 13:34:54 up 5 min,  0 users,  load average: 0.87, 1.71, 0.93
	Linux addons-640912 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Nov  1 20:49:51 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [b5f31d63b316b6b4fceaf2cd37baa19b18c8dd0332189d49ddc503733e9f8479] <==
	E1109 13:30:47.751126       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.99.89:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.99.89:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.99.89:443: connect: connection refused" logger="UnhandledError"
	E1109 13:30:47.752177       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.99.89:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.99.89:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.99.89:443: connect: connection refused" logger="UnhandledError"
	E1109 13:30:47.759123       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.99.89:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.99.89:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.99.89:443: connect: connection refused" logger="UnhandledError"
	W1109 13:30:48.744109       1 handler_proxy.go:99] no RequestInfo found in the context
	W1109 13:30:48.744125       1 handler_proxy.go:99] no RequestInfo found in the context
	E1109 13:30:48.744161       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1109 13:30:48.744175       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1109 13:30:48.744183       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1109 13:30:48.745368       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1109 13:30:52.828573       1 handler_proxy.go:99] no RequestInfo found in the context
	E1109 13:30:52.828619       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1109 13:30:52.832035       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.99.89:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.99.89:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1109 13:30:52.900702       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1109 13:30:52.919741       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1109 13:31:36.567406       1 conn.go:339] Error on socket receive: read tcp 192.168.39.228:8443->192.168.39.1:33552: use of closed network connection
	I1109 13:31:47.188509       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.149.219"}
	I1109 13:32:15.730439       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1109 13:32:15.992589       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.68.202"}
	I1109 13:32:53.848728       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [1939a4061bbfb63f842de0036168e14c4e2c4e170bc480e3de75bc49ea789293] <==
	I1109 13:29:49.612590       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-640912"
	I1109 13:29:49.612991       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1109 13:29:49.612713       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 13:29:49.613290       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 13:29:49.613297       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 13:29:49.613360       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1109 13:29:49.612798       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 13:29:49.616256       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1109 13:29:49.616606       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1109 13:29:49.618294       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 13:29:49.620056       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 13:29:49.620089       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	E1109 13:30:19.541345       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1109 13:30:19.542179       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1109 13:30:19.542277       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1109 13:30:19.638797       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1109 13:30:19.644003       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 13:30:19.657628       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1109 13:30:19.758998       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1109 13:30:49.652808       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1109 13:30:49.771791       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1109 13:31:50.564758       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1109 13:32:14.119991       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I1109 13:32:16.526063       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I1109 13:32:49.285352       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	
	
	==> kube-proxy [4d0daf4cf92a3d52b6f1d92f9a1bfeeac5d7a7f31da83e4fb7df39268b3e4ea5] <==
	I1109 13:29:52.980421       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 13:29:53.082837       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 13:29:53.086021       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.228"]
	E1109 13:29:53.086130       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 13:29:53.751653       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1109 13:29:53.751799       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1109 13:29:53.751834       1 server_linux.go:132] "Using iptables Proxier"
	I1109 13:29:53.834205       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 13:29:53.836618       1 server.go:527] "Version info" version="v1.34.1"
	I1109 13:29:53.836664       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:29:53.846430       1 config.go:200] "Starting service config controller"
	I1109 13:29:53.846481       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 13:29:53.846506       1 config.go:106] "Starting endpoint slice config controller"
	I1109 13:29:53.846510       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 13:29:53.846520       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 13:29:53.846523       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 13:29:53.874452       1 config.go:309] "Starting node config controller"
	I1109 13:29:53.874500       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 13:29:53.874508       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 13:29:53.947795       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 13:29:53.947900       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 13:29:53.947945       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7a5312ba3c9de673540c31b087b548f07f9144cbe046b292ca5260fa3a2418c7] <==
	E1109 13:29:41.646013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 13:29:41.646142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 13:29:41.646741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 13:29:41.647690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 13:29:41.647977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 13:29:41.648034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 13:29:42.450718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 13:29:42.531090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 13:29:42.551808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 13:29:42.573030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 13:29:42.613834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1109 13:29:42.617089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 13:29:42.636745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 13:29:42.745262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 13:29:42.747084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 13:29:42.809366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 13:29:42.869592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 13:29:42.934044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 13:29:42.941621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 13:29:42.985001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 13:29:43.033735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 13:29:43.088695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 13:29:43.123724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 13:29:43.146070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1109 13:29:44.634226       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 13:33:45 addons-640912 kubelet[1496]: E1109 13:33:45.573739    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762695225572823027  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:33:55 addons-640912 kubelet[1496]: E1109 13:33:55.577220    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762695235576536581  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:33:55 addons-640912 kubelet[1496]: E1109 13:33:55.577292    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762695235576536581  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:33:58 addons-640912 kubelet[1496]: E1109 13:33:58.828438    1496 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Nov 09 13:33:58 addons-640912 kubelet[1496]: E1109 13:33:58.828492    1496 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Nov 09 13:33:58 addons-640912 kubelet[1496]: E1109 13:33:58.829534    1496 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod test-local-path_default(24715673-6be0-4489-8fb3-064bda4b15c9): ErrImagePull: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 09 13:33:58 addons-640912 kubelet[1496]: E1109 13:33:58.829733    1496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="24715673-6be0-4489-8fb3-064bda4b15c9"
	Nov 09 13:34:05 addons-640912 kubelet[1496]: E1109 13:34:05.580102    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762695245579561972  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:34:05 addons-640912 kubelet[1496]: E1109 13:34:05.580132    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762695245579561972  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:34:06 addons-640912 kubelet[1496]: I1109 13:34:06.080085    1496 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 09 13:34:10 addons-640912 kubelet[1496]: E1109 13:34:10.083271    1496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="24715673-6be0-4489-8fb3-064bda4b15c9"
	Nov 09 13:34:15 addons-640912 kubelet[1496]: E1109 13:34:15.585006    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762695255583776587  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:34:15 addons-640912 kubelet[1496]: E1109 13:34:15.585045    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762695255583776587  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:34:22 addons-640912 kubelet[1496]: I1109 13:34:22.079958    1496 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-2tv7p" secret="" err="secret \"gcp-auth\" not found"
	Nov 09 13:34:25 addons-640912 kubelet[1496]: E1109 13:34:25.588697    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762695265588042884  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:34:25 addons-640912 kubelet[1496]: E1109 13:34:25.588730    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762695265588042884  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:34:28 addons-640912 kubelet[1496]: E1109 13:34:28.945053    1496 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 09 13:34:28 addons-640912 kubelet[1496]: E1109 13:34:28.945137    1496 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 09 13:34:28 addons-640912 kubelet[1496]: E1109 13:34:28.945364    1496 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(e7006701-5d88-4365-b100-377ce22b89cc): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 09 13:34:28 addons-640912 kubelet[1496]: E1109 13:34:28.945401    1496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e7006701-5d88-4365-b100-377ce22b89cc"
	Nov 09 13:34:35 addons-640912 kubelet[1496]: E1109 13:34:35.592230    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762695275591280373  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:34:35 addons-640912 kubelet[1496]: E1109 13:34:35.592422    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762695275591280373  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:34:44 addons-640912 kubelet[1496]: E1109 13:34:44.079217    1496 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="e7006701-5d88-4365-b100-377ce22b89cc"
	Nov 09 13:34:45 addons-640912 kubelet[1496]: E1109 13:34:45.596379    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762695285595710839  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	Nov 09 13:34:45 addons-640912 kubelet[1496]: E1109 13:34:45.596408    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762695285595710839  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:510745}  inodes_used:{value:186}}"
	
	
	==> storage-provisioner [1bb6f2c7163352aca0e271bcb0b20322b3ac5af9c07044f0d2523de5bd207ba1] <==
	W1109 13:34:30.209138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:32.215724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:32.222835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:34.226142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:34.233176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:36.237786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:36.246944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:38.252447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:38.258562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:40.263766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:40.270687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:42.277252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:42.287177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:44.291130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:44.299960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:46.303511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:46.310641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:48.314723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:48.323347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:50.327638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:50.336594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:52.344510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:52.354010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:54.359572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:54.372137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
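
Note on the log excerpt above: every ErrImagePull in the kubelet section resolves to Docker Hub's unauthenticated pull limit ("toomanyrequests"), not to a cluster fault. A minimal workaround sketch, assuming the CI host itself still has pull quota (or is logged in to Docker Hub) and reusing the profile name from this run, is to side-load the failing tags with minikube's stock "image load" subcommand so the test pods never reach the registry:

	# hypothetical mitigation, not part of the recorded run:
	# pull each image on the host, then copy it into the cluster's container storage
	for img in busybox:stable nginx:alpine nginx:latest; do
	  docker pull "$img"
	  out/minikube-linux-amd64 -p addons-640912 image load "$img"
	done

Once the images are present in the node's storage, the ImagePullBackOff pods above would start on their next sync without contacting docker.io.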
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-640912 -n addons-640912
helpers_test.go:269: (dbg) Run:  kubectl --context addons-640912 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-kj7f9 ingress-nginx-admission-patch-7kdd8
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-640912 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-kj7f9 ingress-nginx-admission-patch-7kdd8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-640912 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-kj7f9 ingress-nginx-admission-patch-7kdd8: exit status 1 (116.596456ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-640912/192.168.39.228
	Start Time:       Sun, 09 Nov 2025 13:32:15 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nxkzm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nxkzm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m40s                default-scheduler  Successfully assigned default/nginx to addons-640912
	  Warning  Failed     87s                  kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     87s                  kubelet            Error: ErrImagePull
	  Normal   BackOff    86s                  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     86s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    74s (x2 over 2m39s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-640912/192.168.39.228
	Start Time:       Sun, 09 Nov 2025 13:32:14 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bmmc7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-bmmc7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  2m41s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-640912
	  Normal   Pulling    103s (x2 over 2m41s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     27s (x2 over 117s)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     27s (x2 over 117s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    11s (x2 over 116s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     11s (x2 over 116s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-640912/192.168.39.228
	Start Time:       Sun, 09 Nov 2025 13:31:52 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sgzjt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-sgzjt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m3s                 default-scheduler  Successfully assigned default/test-local-path to addons-640912
	  Warning  Failed     2m29s                kubelet            Failed to pull image "busybox:stable": fetching target platform image selected from image index: reading manifest sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     57s (x2 over 2m29s)  kubelet            Error: ErrImagePull
	  Warning  Failed     57s                  kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    45s (x2 over 2m29s)  kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     45s (x2 over 2m29s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    33s (x3 over 2m59s)  kubelet            Pulling image "busybox:stable"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-kj7f9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7kdd8" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-640912 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-kj7f9 ingress-nginx-admission-patch-7kdd8: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-640912 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-640912 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.071613479s)
--- FAIL: TestAddons/parallel/LocalPath (232.58s)
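
Both LocalPath pulls of busybox:stable died on the same "toomanyrequests" response as the Ingress test, so the shared pull quota is worth confirming before blaming either addon. Docker documents an anonymous-token probe for this; a sketch, assuming curl and jq are available on the CI host, is:

	# fetch an anonymous pull token, then read the ratelimit headers Docker Hub returns
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -s --head -H "Authorization: Bearer $TOKEN" \
	  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i '^ratelimit'

A "ratelimit-remaining: 0" header here would confirm that every docker.io pull from this host will keep failing until the window resets.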

TestFunctional/serial/ExtraConfig (355.41s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-419649 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1109 13:46:27.378585  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:46:27.385192  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:46:27.396754  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:46:27.418695  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:46:27.460144  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:46:27.541822  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:46:27.703511  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:46:28.025295  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:46:28.667482  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:46:29.949200  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:46:32.511115  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:46:37.632932  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:46:47.875163  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:47:08.357428  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:47:49.320562  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:49:11.245497  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
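Note: every failure above points at the same missing file, the client certificate for the addons-640912 profile, which no longer exists on disk; the retry intervals roughly double from ~10ms up to ~80s, consistent with exponential backoff in the client transport cache. A minimal spot-check, assuming the same MINIKUBE_HOME layout as this run:

	# Path copied verbatim from the errors above; expect "No such file or directory".
	ls -l /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt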
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-419649 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5m52.92268158s)

-- stdout --
	* [functional-419649] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "functional-419649" primary control-plane node in "functional-419649" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded

** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-419649 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:776: restart took 5m52.922931299s for "functional-419649" cluster.
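Exit status 80 is minikube's guest-error class; the stderr above names the reason (GUEST_START: WaitExtra: context deadline exceeded), i.e. the --wait=all component verification timed out. A hedged reproduction sketch, using only the binary, profile, and flags that already appear in this report:

	# Rerun the failing restart with step-level debug logging (-v=8, as used
	# earlier against this profile) to see where the extra-component wait stalls.
	out/minikube-linux-amd64 start -p functional-419649 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all --alsologtostderr -v=8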
I1109 13:50:55.707684  553473 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-419649 -n functional-419649
helpers_test.go:252: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-419649 logs -n 25: (1.869022734s)
helpers_test.go:260: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                     │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-909540 --log_dir /tmp/nospam-909540 unpause                                                          │ nospam-909540     │ jenkins │ v1.37.0 │ 09 Nov 25 13:42 UTC │ 09 Nov 25 13:42 UTC │
	│ unpause │ nospam-909540 --log_dir /tmp/nospam-909540 unpause                                                          │ nospam-909540     │ jenkins │ v1.37.0 │ 09 Nov 25 13:42 UTC │ 09 Nov 25 13:42 UTC │
	│ unpause │ nospam-909540 --log_dir /tmp/nospam-909540 unpause                                                          │ nospam-909540     │ jenkins │ v1.37.0 │ 09 Nov 25 13:42 UTC │ 09 Nov 25 13:42 UTC │
	│ stop    │ nospam-909540 --log_dir /tmp/nospam-909540 stop                                                             │ nospam-909540     │ jenkins │ v1.37.0 │ 09 Nov 25 13:42 UTC │ 09 Nov 25 13:42 UTC │
	│ stop    │ nospam-909540 --log_dir /tmp/nospam-909540 stop                                                             │ nospam-909540     │ jenkins │ v1.37.0 │ 09 Nov 25 13:42 UTC │ 09 Nov 25 13:42 UTC │
	│ stop    │ nospam-909540 --log_dir /tmp/nospam-909540 stop                                                             │ nospam-909540     │ jenkins │ v1.37.0 │ 09 Nov 25 13:42 UTC │ 09 Nov 25 13:42 UTC │
	│ delete  │ -p nospam-909540                                                                                            │ nospam-909540     │ jenkins │ v1.37.0 │ 09 Nov 25 13:42 UTC │ 09 Nov 25 13:42 UTC │
	│ start   │ -p functional-419649 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:42 UTC │ 09 Nov 25 13:44 UTC │
	│ start   │ -p functional-419649 --alsologtostderr -v=8                                                                 │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:44 UTC │ 09 Nov 25 13:44 UTC │
	│ cache   │ functional-419649 cache add registry.k8s.io/pause:3.1                                                       │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:44 UTC │ 09 Nov 25 13:44 UTC │
	│ cache   │ functional-419649 cache add registry.k8s.io/pause:3.3                                                       │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:44 UTC │ 09 Nov 25 13:44 UTC │
	│ cache   │ functional-419649 cache add registry.k8s.io/pause:latest                                                    │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:44 UTC │ 09 Nov 25 13:44 UTC │
	│ cache   │ functional-419649 cache add minikube-local-cache-test:functional-419649                                     │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:44 UTC │ 09 Nov 25 13:45 UTC │
	│ cache   │ functional-419649 cache delete minikube-local-cache-test:functional-419649                                  │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:45 UTC │ 09 Nov 25 13:45 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                            │ minikube          │ jenkins │ v1.37.0 │ 09 Nov 25 13:45 UTC │ 09 Nov 25 13:45 UTC │
	│ cache   │ list                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 09 Nov 25 13:45 UTC │ 09 Nov 25 13:45 UTC │
	│ ssh     │ functional-419649 ssh sudo crictl images                                                                    │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:45 UTC │ 09 Nov 25 13:45 UTC │
	│ ssh     │ functional-419649 ssh sudo crictl rmi registry.k8s.io/pause:latest                                          │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:45 UTC │ 09 Nov 25 13:45 UTC │
	│ ssh     │ functional-419649 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                     │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:45 UTC │                     │
	│ cache   │ functional-419649 cache reload                                                                              │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:45 UTC │ 09 Nov 25 13:45 UTC │
	│ ssh     │ functional-419649 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                     │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:45 UTC │ 09 Nov 25 13:45 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                            │ minikube          │ jenkins │ v1.37.0 │ 09 Nov 25 13:45 UTC │ 09 Nov 25 13:45 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                         │ minikube          │ jenkins │ v1.37.0 │ 09 Nov 25 13:45 UTC │ 09 Nov 25 13:45 UTC │
	│ kubectl │ functional-419649 kubectl -- --context functional-419649 get pods                                           │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:45 UTC │ 09 Nov 25 13:45 UTC │
	│ start   │ -p functional-419649 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all    │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:45 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:45:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:45:02.847094  559203 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:45:02.847360  559203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:45:02.847364  559203 out.go:374] Setting ErrFile to fd 2...
	I1109 13:45:02.847367  559203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:45:02.847592  559203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 13:45:02.848126  559203 out.go:368] Setting JSON to false
	I1109 13:45:02.849095  559203 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":70052,"bootTime":1762625851,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 13:45:02.849202  559203 start.go:143] virtualization: kvm guest
	I1109 13:45:02.851348  559203 out.go:179] * [functional-419649] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 13:45:02.852776  559203 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:45:02.852780  559203 notify.go:221] Checking for updates...
	I1109 13:45:02.855401  559203 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:45:02.856817  559203 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 13:45:02.858238  559203 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 13:45:02.859590  559203 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 13:45:02.861028  559203 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:45:02.862871  559203 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:45:02.862972  559203 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:45:02.900990  559203 out.go:179] * Using the kvm2 driver based on existing profile
	I1109 13:45:02.902194  559203 start.go:309] selected driver: kvm2
	I1109 13:45:02.902203  559203 start.go:930] validating driver "kvm2" against &{Name:functional-419649 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-419649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:45:02.902347  559203 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:45:02.903921  559203 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:45:02.903979  559203 cni.go:84] Creating CNI manager for ""
	I1109 13:45:02.904062  559203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 13:45:02.904148  559203 start.go:353] cluster config:
	{Name:functional-419649 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-419649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:45:02.904275  559203 iso.go:125] acquiring lock: {Name:mk8f0547b4600c0b2d1e831f024251df19b55199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:45:02.906150  559203 out.go:179] * Starting "functional-419649" primary control-plane node in "functional-419649" cluster
	I1109 13:45:02.907462  559203 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:45:02.907493  559203 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 13:45:02.907509  559203 cache.go:65] Caching tarball of preloaded images
	I1109 13:45:02.907635  559203 preload.go:238] Found /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 13:45:02.907643  559203 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 13:45:02.907737  559203 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/config.json ...
	I1109 13:45:02.907975  559203 start.go:360] acquireMachinesLock for functional-419649: {Name:mkb2a0b4add9a99b18ce9ab72b74eb5b4fda0e0a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1109 13:45:02.908018  559203 start.go:364] duration metric: took 28.011µs to acquireMachinesLock for "functional-419649"
	I1109 13:45:02.908028  559203 start.go:96] Skipping create...Using existing machine configuration
	I1109 13:45:02.908032  559203 fix.go:54] fixHost starting: 
	I1109 13:45:02.910029  559203 fix.go:112] recreateIfNeeded on functional-419649: state=Running err=<nil>
	W1109 13:45:02.910053  559203 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 13:45:02.911640  559203 out.go:252] * Updating the running kvm2 "functional-419649" VM ...
	I1109 13:45:02.911674  559203 machine.go:94] provisionDockerMachine start ...
	I1109 13:45:02.914748  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:02.915319  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:02.915340  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:02.915529  559203 main.go:143] libmachine: Using SSH client type: native
	I1109 13:45:02.915758  559203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1109 13:45:02.915763  559203 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 13:45:03.027083  559203 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-419649
	
	I1109 13:45:03.027113  559203 buildroot.go:166] provisioning hostname "functional-419649"
	I1109 13:45:03.030945  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.031447  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:03.031465  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.031666  559203 main.go:143] libmachine: Using SSH client type: native
	I1109 13:45:03.031990  559203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1109 13:45:03.032002  559203 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-419649 && echo "functional-419649" | sudo tee /etc/hostname
	I1109 13:45:03.164922  559203 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-419649
	
	I1109 13:45:03.168188  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.168636  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:03.168669  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.168894  559203 main.go:143] libmachine: Using SSH client type: native
	I1109 13:45:03.169112  559203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1109 13:45:03.169123  559203 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-419649' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-419649/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-419649' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 13:45:03.289760  559203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
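	# Aside (hedged, not captured output): the heredoc above applies the Debian
	# convention of mapping the machine's own hostname to 127.0.1.1. A hypothetical
	# in-guest spot-check:
	grep '^127.0.1.1' /etc/hosts    # expect: 127.0.1.1 functional-419649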
	I1109 13:45:03.289832  559203 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21139-549598/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-549598/.minikube}
	I1109 13:45:03.289856  559203 buildroot.go:174] setting up certificates
	I1109 13:45:03.289869  559203 provision.go:84] configureAuth start
	I1109 13:45:03.295306  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.295926  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:03.295953  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.299991  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.300685  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:03.300716  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.301020  559203 provision.go:143] copyHostCerts
	I1109 13:45:03.301095  559203 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-549598/.minikube/cert.pem, removing ...
	I1109 13:45:03.301124  559203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-549598/.minikube/cert.pem
	I1109 13:45:03.301244  559203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/cert.pem (1123 bytes)
	I1109 13:45:03.301400  559203 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-549598/.minikube/key.pem, removing ...
	I1109 13:45:03.301405  559203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-549598/.minikube/key.pem
	I1109 13:45:03.301437  559203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/key.pem (1679 bytes)
	I1109 13:45:03.301489  559203 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-549598/.minikube/ca.pem, removing ...
	I1109 13:45:03.301492  559203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-549598/.minikube/ca.pem
	I1109 13:45:03.301515  559203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/ca.pem (1082 bytes)
	I1109 13:45:03.301590  559203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem org=jenkins.functional-419649 san=[127.0.0.1 192.168.39.90 functional-419649 localhost minikube]
	I1109 13:45:03.484770  559203 provision.go:177] copyRemoteCerts
	I1109 13:45:03.484840  559203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 13:45:03.488430  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.489084  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:03.489108  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.489346  559203 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/functional-419649/id_rsa Username:docker}
	I1109 13:45:03.577560  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 13:45:03.625368  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1109 13:45:03.666066  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 13:45:03.705166  559203 provision.go:87] duration metric: took 415.28178ms to configureAuth
	I1109 13:45:03.705190  559203 buildroot.go:189] setting minikube options for container-runtime
	I1109 13:45:03.705397  559203 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:45:03.709087  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.709610  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:03.709639  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.709881  559203 main.go:143] libmachine: Using SSH client type: native
	I1109 13:45:03.710159  559203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1109 13:45:03.710168  559203 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 13:45:09.565045  559203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 13:45:09.565064  559203 machine.go:97] duration metric: took 6.65338378s to provisionDockerMachine
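	# Aside (hedged): the tee above writes /etc/sysconfig/crio.minikube and then
	# restarts crio within the same SSH command; this pass completed in about 5.9s.
	# A hypothetical verification that the options file landed:
	minikube -p functional-419649 ssh -- cat /etc/sysconfig/crio.minikube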
	I1109 13:45:09.565075  559203 start.go:293] postStartSetup for "functional-419649" (driver="kvm2")
	I1109 13:45:09.565084  559203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 13:45:09.565159  559203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 13:45:09.568571  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.569078  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:09.569096  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.569287  559203 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/functional-419649/id_rsa Username:docker}
	I1109 13:45:09.657920  559203 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 13:45:09.664193  559203 info.go:137] Remote host: Buildroot 2025.02
	I1109 13:45:09.664221  559203 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-549598/.minikube/addons for local assets ...
	I1109 13:45:09.664303  559203 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-549598/.minikube/files for local assets ...
	I1109 13:45:09.664376  559203 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem -> 5534732.pem in /etc/ssl/certs
	I1109 13:45:09.664442  559203 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/test/nested/copy/553473/hosts -> hosts in /etc/test/nested/copy/553473
	I1109 13:45:09.664479  559203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/553473
	I1109 13:45:09.679696  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem --> /etc/ssl/certs/5534732.pem (1708 bytes)
	I1109 13:45:09.716342  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/test/nested/copy/553473/hosts --> /etc/test/nested/copy/553473/hosts (40 bytes)
	I1109 13:45:09.752619  559203 start.go:296] duration metric: took 187.524618ms for postStartSetup
	I1109 13:45:09.752668  559203 fix.go:56] duration metric: took 6.844630089s for fixHost
	I1109 13:45:09.756013  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.756422  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:09.756436  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.756616  559203 main.go:143] libmachine: Using SSH client type: native
	I1109 13:45:09.756838  559203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1109 13:45:09.756844  559203 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1109 13:45:09.930120  559203 main.go:143] libmachine: SSH cmd err, output: <nil>: 1762695909.924449478
	
	I1109 13:45:09.930138  559203 fix.go:216] guest clock: 1762695909.924449478
	I1109 13:45:09.930149  559203 fix.go:229] Guest: 2025-11-09 13:45:09.924449478 +0000 UTC Remote: 2025-11-09 13:45:09.752671487 +0000 UTC m=+6.963111750 (delta=171.777991ms)
	I1109 13:45:09.930174  559203 fix.go:200] guest clock delta is within tolerance: 171.777991ms
	I1109 13:45:09.930181  559203 start.go:83] releasing machines lock for "functional-419649", held for 7.02215711s
	I1109 13:45:09.934331  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.934820  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:09.934841  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.935708  559203 ssh_runner.go:195] Run: cat /version.json
	I1109 13:45:09.935808  559203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 13:45:09.939805  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.940152  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.940368  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:09.940387  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.940593  559203 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/functional-419649/id_rsa Username:docker}
	I1109 13:45:09.940627  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:09.940648  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.940989  559203 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/functional-419649/id_rsa Username:docker}
	I1109 13:45:10.137772  559203 ssh_runner.go:195] Run: systemctl --version
	I1109 13:45:10.160746  559203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 13:45:10.404731  559203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 13:45:10.418925  559203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 13:45:10.418988  559203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 13:45:10.443481  559203 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 13:45:10.443505  559203 start.go:496] detecting cgroup driver to use...
	I1109 13:45:10.443575  559203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 13:45:10.489331  559203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 13:45:10.536067  559203 docker.go:218] disabling cri-docker service (if available) ...
	I1109 13:45:10.536142  559203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 13:45:10.588486  559203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 13:45:10.639617  559203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 13:45:10.987138  559203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 13:45:11.366704  559203 docker.go:234] disabling docker service ...
	I1109 13:45:11.366776  559203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 13:45:11.424407  559203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 13:45:11.463825  559203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 13:45:11.677106  559203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 13:45:11.878848  559203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 13:45:11.899452  559203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 13:45:11.928471  559203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 13:45:11.928537  559203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:45:11.942712  559203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 13:45:11.942781  559203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:45:11.957392  559203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:45:11.972849  559203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:45:11.988110  559203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 13:45:12.003608  559203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:45:12.018585  559203 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:45:12.034850  559203 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
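	# Aside (hedged sketch): the sed edits above should leave the drop-in with
	# pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "cgroupfs",
	# conmon_cgroup = "pod", and "net.ipv4.ip_unprivileged_port_start=0" under
	# default_sysctls. A hypothetical in-guest check:
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf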
	I1109 13:45:12.050680  559203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 13:45:12.063559  559203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 13:45:12.077271  559203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:45:12.286219  559203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 13:46:42.836165  559203 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.549902913s)
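	# Aside: this 1m30.5s crio restart is where the start budget went; the same
	# restart during provisioning (13:45:03 -> 13:45:09 above) finished in under 6s.
	# A hedged way to pull crio's own startup log for that window from the guest:
	minikube -p functional-419649 ssh -- sudo journalctl -u crio --since "2025-11-09 13:45:12" --until "2025-11-09 13:46:43" --no-pager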
	I1109 13:46:42.836225  559203 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 13:46:42.836306  559203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 13:46:42.844497  559203 start.go:564] Will wait 60s for crictl version
	I1109 13:46:42.844561  559203 ssh_runner.go:195] Run: which crictl
	I1109 13:46:42.850785  559203 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1109 13:46:42.903352  559203 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1109 13:46:42.903428  559203 ssh_runner.go:195] Run: crio --version
	I1109 13:46:42.940290  559203 ssh_runner.go:195] Run: crio --version
	I1109 13:46:42.978442  559203 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1109 13:46:42.983300  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:46:42.983986  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:46:42.984012  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:46:42.984294  559203 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1109 13:46:42.992598  559203 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1109 13:46:42.993996  559203 kubeadm.go:884] updating cluster {Name:functional-419649 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-419649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 13:46:42.994171  559203 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:46:42.994242  559203 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:46:43.052232  559203 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:46:43.052245  559203 crio.go:433] Images already preloaded, skipping extraction
	I1109 13:46:43.052301  559203 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:46:43.097545  559203 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:46:43.097560  559203 cache_images.go:86] Images are preloaded, skipping loading
	I1109 13:46:43.097567  559203 kubeadm.go:935] updating node { 192.168.39.90 8441 v1.34.1 crio true true} ...
	I1109 13:46:43.097684  559203 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-419649 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-419649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 13:46:43.097760  559203 ssh_runner.go:195] Run: crio config
	I1109 13:46:43.159738  559203 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1109 13:46:43.159777  559203 cni.go:84] Creating CNI manager for ""
	I1109 13:46:43.159788  559203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 13:46:43.159823  559203 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 13:46:43.159857  559203 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-419649 NodeName:functional-419649 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 13:46:43.160032  559203 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-419649"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.90"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 13:46:43.160133  559203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 13:46:43.176463  559203 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 13:46:43.176572  559203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 13:46:43.192722  559203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1109 13:46:43.221827  559203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 13:46:43.251089  559203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2067 bytes)
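	# Aside (hedged): kubeadm v1.26+ ships "kubeadm config validate", which can
	# sanity-check the file just copied; the binary path below is assumed from the
	# kubelet path shown earlier in this log.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new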
	I1109 13:46:43.280865  559203 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I1109 13:46:43.287562  559203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:46:43.490085  559203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:46:43.513249  559203 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649 for IP: 192.168.39.90
	I1109 13:46:43.513264  559203 certs.go:195] generating shared ca certs ...
	I1109 13:46:43.513284  559203 certs.go:227] acquiring lock for ca certs: {Name:mkc766226c1ec8ac0cc61519ae61374bb0aa3b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:46:43.513562  559203 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key
	I1109 13:46:43.513603  559203 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key
	I1109 13:46:43.513617  559203 certs.go:257] generating profile certs ...
	I1109 13:46:43.513730  559203 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.key
	I1109 13:46:43.513775  559203 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/apiserver.key.6dc4be3b
	I1109 13:46:43.513839  559203 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/proxy-client.key
	I1109 13:46:43.513949  559203 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473.pem (1338 bytes)
	W1109 13:46:43.513987  559203 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473_empty.pem, impossibly tiny 0 bytes
	I1109 13:46:43.513993  559203 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 13:46:43.514012  559203 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem (1082 bytes)
	I1109 13:46:43.514030  559203 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem (1123 bytes)
	I1109 13:46:43.514054  559203 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem (1679 bytes)
	I1109 13:46:43.514103  559203 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem (1708 bytes)
	I1109 13:46:43.514909  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 13:46:43.553049  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 13:46:43.590481  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 13:46:43.627199  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 13:46:43.666998  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1109 13:46:43.704574  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 13:46:43.741698  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 13:46:43.778748  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 13:46:43.817457  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473.pem --> /usr/share/ca-certificates/553473.pem (1338 bytes)
	I1109 13:46:43.854768  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem --> /usr/share/ca-certificates/5534732.pem (1708 bytes)
	I1109 13:46:43.893208  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 13:46:43.930631  559203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 13:46:43.960296  559203 ssh_runner.go:195] Run: openssl version
	I1109 13:46:43.969312  559203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 13:46:43.989417  559203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:46:43.997651  559203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:46:43.997781  559203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:46:44.010071  559203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 13:46:44.028511  559203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/553473.pem && ln -fs /usr/share/ca-certificates/553473.pem /etc/ssl/certs/553473.pem"
	I1109 13:46:44.048127  559203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/553473.pem
	I1109 13:46:44.056172  559203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:42 /usr/share/ca-certificates/553473.pem
	I1109 13:46:44.056246  559203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/553473.pem
	I1109 13:46:44.066875  559203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/553473.pem /etc/ssl/certs/51391683.0"
	I1109 13:46:44.082676  559203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5534732.pem && ln -fs /usr/share/ca-certificates/5534732.pem /etc/ssl/certs/5534732.pem"
	I1109 13:46:44.098817  559203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5534732.pem
	I1109 13:46:44.106669  559203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:42 /usr/share/ca-certificates/5534732.pem
	I1109 13:46:44.106742  559203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5534732.pem
	I1109 13:46:44.115568  559203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5534732.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 13:46:44.129556  559203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 13:46:44.136177  559203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 13:46:44.145148  559203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 13:46:44.154077  559203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 13:46:44.163010  559203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 13:46:44.171974  559203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 13:46:44.180832  559203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 13:46:44.190451  559203 kubeadm.go:401] StartCluster: {Name:functional-419649 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-419649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:46:44.190535  559203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:46:44.190620  559203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:46:44.239542  559203 cri.go:89] found id: "5c23c19796791474d9cbefd6426cedf9b06ba08c535f77b519664e5d5fe0e1da"
	I1109 13:46:44.239558  559203 cri.go:89] found id: "dabded828a263500488aec4eed71cc058df72adef2bcdb48a96af101f8296650"
	I1109 13:46:44.239562  559203 cri.go:89] found id: "6678989530f542c497fe013e487fb90f58d077213ceab56369a11083ac93d8aa"
	I1109 13:46:44.239564  559203 cri.go:89] found id: "a7ec1c8b227e612c946139a2114c20f15d630c80f99ada219f51c961793c721b"
	I1109 13:46:44.239566  559203 cri.go:89] found id: "455a05faf8a4dc9342caa1ccb9a3986f838c476148641b63fc61957a6a5f78d3"
	I1109 13:46:44.239568  559203 cri.go:89] found id: "9251976d9d7b91c3dd81cc2547a4867d13c44581df4bf007267d504fee90a86e"
	I1109 13:46:44.239569  559203 cri.go:89] found id: "3c5b8301c397c59a54b37b9851ca454244b8fbec40b66bf02f5df0283be65347"
	I1109 13:46:44.239571  559203 cri.go:89] found id: "aba9dc19a06ff98bdfe68ee5b389ed2498b2d1b0320879106ffc77cd914731ac"
	I1109 13:46:44.239573  559203 cri.go:89] found id: ""
	I1109 13:46:44.239626  559203 ssh_runner.go:195] Run: sudo runc list -f json
-- /stdout --
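Note: cri.go:54/89 at the end of the dump above shows how minikube enumerates kube-system containers: it runs `sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` over SSH and records one container ID per output line. A minimal local sketch of the same enumeration, assuming crictl is installed and invoked directly rather than over SSH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// --quiet prints one container ID per line; the label filter limits
		// the listing to containers whose pod lives in kube-system.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}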
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-419649 -n functional-419649
helpers_test.go:269: (dbg) Run:  kubectl --context functional-419649 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestFunctional/serial/ExtraConfig FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ExtraConfig (355.41s)
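Note: earlier in the dump, certs.go validates every control-plane certificate with `openssl x509 -noout -in <cert> -checkend 86400`, which exits non-zero if the certificate will expire within the next 86400 seconds (24 hours); that is why the "skipping valid signed profile cert regeneration" lines appear. A minimal Go sketch of the equivalent check, using an illustrative cert path (this is not minikube's implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Illustrative path; the logs check certs under /var/lib/minikube/certs.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM certificate found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Equivalent of `openssl x509 -checkend 86400`: fail if the cert
		// expires within the next 24 hours.
		if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least 24h")
	}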
TestFunctional/parallel/DashboardCmd (302.72s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-419649 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
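Note: the URL the test waits for comes from `kubectl proxy`. Per dashboard.go:154-177 in the stderr below, minikube launches `kubectl --context functional-419649 proxy --port 36195` and scans the proxy's stdout for the "Starting to serve on host:port" line, which it then reports as the dashboard URL; here the proxy starts, but the health check never succeeds, so no URL is ever printed. A minimal sketch of that scan, assuming kubectl is on PATH (the regex and error handling are illustrative):

	package main

	import (
		"bufio"
		"fmt"
		"os/exec"
		"regexp"
	)

	func main() {
		// Same invocation as in the trace below.
		cmd := exec.Command("kubectl", "--context", "functional-419649", "proxy", "--port", "36195")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			panic(err)
		}
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		// kubectl proxy announces itself as "Starting to serve on 127.0.0.1:36195".
		re := regexp.MustCompile(`Starting to serve on ([\d.]+:\d+)`)
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			if m := re.FindStringSubmatch(sc.Text()); m != nil {
				fmt.Printf("http://%s/\n", m[1]) // the URL the test expects on stdout
				return
			}
		}
		fmt.Println("kubectl proxy never reported a host:port")
	}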
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-419649 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-419649 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-419649 --alsologtostderr -v=1] stderr:
I1109 13:51:39.991766  561590 out.go:360] Setting OutFile to fd 1 ...
I1109 13:51:39.992071  561590 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:51:39.992080  561590 out.go:374] Setting ErrFile to fd 2...
I1109 13:51:39.992085  561590 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:51:39.992297  561590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
I1109 13:51:39.992653  561590 mustload.go:66] Loading cluster: functional-419649
I1109 13:51:39.993053  561590 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:51:39.995391  561590 host.go:66] Checking if "functional-419649" exists ...
I1109 13:51:39.995708  561590 api_server.go:166] Checking apiserver status ...
I1109 13:51:39.995783  561590 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1109 13:51:39.998894  561590 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
I1109 13:51:39.999463  561590 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
I1109 13:51:39.999502  561590 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
I1109 13:51:39.999712  561590 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/functional-419649/id_rsa Username:docker}
I1109 13:51:40.102870  561590 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6656/cgroup
W1109 13:51:40.118462  561590 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6656/cgroup: Process exited with status 1
stdout:
stderr:
I1109 13:51:40.118542  561590 ssh_runner.go:195] Run: ls
I1109 13:51:40.126843  561590 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8441/healthz ...
I1109 13:51:40.134535  561590 api_server.go:279] https://192.168.39.90:8441/healthz returned 200:
ok
W1109 13:51:40.134599  561590 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1109 13:51:40.134785  561590 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:51:40.134823  561590 addons.go:70] Setting dashboard=true in profile "functional-419649"
I1109 13:51:40.134830  561590 addons.go:239] Setting addon dashboard=true in "functional-419649"
I1109 13:51:40.134858  561590 host.go:66] Checking if "functional-419649" exists ...
I1109 13:51:40.138660  561590 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1109 13:51:40.140083  561590 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1109 13:51:40.141265  561590 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1109 13:51:40.141295  561590 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1109 13:51:40.144434  561590 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
I1109 13:51:40.145165  561590 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
I1109 13:51:40.145207  561590 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
I1109 13:51:40.145435  561590 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/functional-419649/id_rsa Username:docker}
I1109 13:51:40.255513  561590 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1109 13:51:40.255546  561590 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1109 13:51:40.284481  561590 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1109 13:51:40.284514  561590 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1109 13:51:40.313064  561590 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1109 13:51:40.313111  561590 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1109 13:51:40.341233  561590 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1109 13:51:40.341258  561590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1109 13:51:40.368215  561590 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1109 13:51:40.368245  561590 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1109 13:51:40.396449  561590 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1109 13:51:40.396498  561590 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1109 13:51:40.425581  561590 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1109 13:51:40.425611  561590 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1109 13:51:40.452711  561590 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1109 13:51:40.452745  561590 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1109 13:51:40.481338  561590 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1109 13:51:40.481371  561590 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1109 13:51:40.508350  561590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1109 13:51:41.401164  561590 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	minikube -p functional-419649 addons enable metrics-server
I1109 13:51:41.402625  561590 addons.go:202] Writing out "functional-419649" config to set dashboard=true...
W1109 13:51:41.402977  561590 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1109 13:51:41.403719  561590 kapi.go:59] client config for functional-419649: &rest.Config{Host:"https://192.168.39.90:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.key", CAFile:"/home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1109 13:51:41.404314  561590 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1109 13:51:41.404337  561590 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1109 13:51:41.404347  561590 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1109 13:51:41.404354  561590 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1109 13:51:41.404358  561590 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1109 13:51:41.416931  561590 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  a6387ac6-7bda-4ac7-a2a7-838576f59573 1047 0 2025-11-09 13:51:41 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-11-09 13:51:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.108.76.215,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.108.76.215],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1109 13:51:41.417110  561590 out.go:285] * Launching proxy ...
* Launching proxy ...
I1109 13:51:41.417199  561590 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-419649 proxy --port 36195]
I1109 13:51:41.417674  561590 dashboard.go:159] Waiting for kubectl to output host:port ...
I1109 13:51:41.466105  561590 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1109 13:51:41.466200  561590 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1109 13:51:41.476344  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[720a6e61-b8ee-4572-a77d-0991a149430a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc0017827c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206280 TLS:<nil>}
I1109 13:51:41.476465  561590 retry.go:31] will retry after 138.277µs: Temporary Error: unexpected response code: 503
I1109 13:51:41.481400  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ac303afc-8db0-4947-98f2-d968d22c5b38] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc001616300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000158b40 TLS:<nil>}
I1109 13:51:41.481507  561590 retry.go:31] will retry after 201.648µs: Temporary Error: unexpected response code: 503
I1109 13:51:41.487087  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[69caa107-d8e8-4f7c-ad77-89e00c2c29c1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc00151fe80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002063c0 TLS:<nil>}
I1109 13:51:41.487164  561590 retry.go:31] will retry after 217.855µs: Temporary Error: unexpected response code: 503
I1109 13:51:41.491659  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d7b56c5b-7caf-4e1b-9560-09e7e01ef214] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc0017828c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003717c0 TLS:<nil>}
I1109 13:51:41.491728  561590 retry.go:31] will retry after 434.199µs: Temporary Error: unexpected response code: 503
I1109 13:51:41.496200  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[57ee90f9-c3f9-499b-9b8a-59d5099c26db] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc0016da040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000158c80 TLS:<nil>}
I1109 13:51:41.496269  561590 retry.go:31] will retry after 394.053µs: Temporary Error: unexpected response code: 503
I1109 13:51:41.501072  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f4362c32-295c-4e5c-b4d7-3e889d667ab8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc0017829c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000371900 TLS:<nil>}
I1109 13:51:41.501167  561590 retry.go:31] will retry after 631.184µs: Temporary Error: unexpected response code: 503
I1109 13:51:41.505774  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6d2ed9b1-89d7-4be4-96bd-96cb32fe8db4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc0016da140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000158dc0 TLS:<nil>}
I1109 13:51:41.505876  561590 retry.go:31] will retry after 1.653276ms: Temporary Error: unexpected response code: 503
I1109 13:51:41.511530  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[938fc6df-0102-4445-8312-a2f9086bb5a8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc001782ac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000371a40 TLS:<nil>}
I1109 13:51:41.511604  561590 retry.go:31] will retry after 1.136227ms: Temporary Error: unexpected response code: 503
I1109 13:51:41.517476  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6d286347-5514-4546-a2b0-ec4877f50042] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc001782b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000158f00 TLS:<nil>}
I1109 13:51:41.517553  561590 retry.go:31] will retry after 2.272639ms: Temporary Error: unexpected response code: 503
I1109 13:51:41.525775  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[16260e23-d3ba-4e4f-8cee-28a5d796eba4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc0016da240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000159040 TLS:<nil>}
I1109 13:51:41.525874  561590 retry.go:31] will retry after 4.044822ms: Temporary Error: unexpected response code: 503
I1109 13:51:41.533743  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[942ecd58-1a73-434d-bd01-ce59aa47cdea] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc001782c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000371b80 TLS:<nil>}
I1109 13:51:41.533841  561590 retry.go:31] will retry after 3.615516ms: Temporary Error: unexpected response code: 503
I1109 13:51:41.541784  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c540061f-3b69-44b9-a716-64c12d100fc0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc001616440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000159180 TLS:<nil>}
I1109 13:51:41.541875  561590 retry.go:31] will retry after 9.180241ms: Temporary Error: unexpected response code: 503
I1109 13:51:41.555492  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c5f7247d-b143-4566-abf7-ceafdb0c5527] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc001782d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206500 TLS:<nil>}
I1109 13:51:41.555575  561590 retry.go:31] will retry after 7.721687ms: Temporary Error: unexpected response code: 503
I1109 13:51:41.567536  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e267271e-da8e-41c1-a4d5-748ae98af6ad] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc0016da380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001592c0 TLS:<nil>}
I1109 13:51:41.567623  561590 retry.go:31] will retry after 10.97443ms: Temporary Error: unexpected response code: 503
I1109 13:51:41.593409  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f9681ee5-50c7-456b-a3df-960d24108cfe] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc001782e40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000371cc0 TLS:<nil>}
I1109 13:51:41.593511  561590 retry.go:31] will retry after 16.695366ms: Temporary Error: unexpected response code: 503
I1109 13:51:41.621067  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cc7afa6d-3a5a-4c20-b2fe-561c56a62d62] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc001616540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000159400 TLS:<nil>}
I1109 13:51:41.621156  561590 retry.go:31] will retry after 32.125173ms: Temporary Error: unexpected response code: 503
I1109 13:51:41.661820  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[86099895-cdb6-4a1e-915e-b14f351814e2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc0016da4c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206640 TLS:<nil>}
I1109 13:51:41.661908  561590 retry.go:31] will retry after 83.809862ms: Temporary Error: unexpected response code: 503
I1109 13:51:41.752398  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[963dd1f0-0346-447c-ab37-04278d43e258] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc001782f00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000371e00 TLS:<nil>}
I1109 13:51:41.752517  561590 retry.go:31] will retry after 88.83577ms: Temporary Error: unexpected response code: 503
I1109 13:51:41.847785  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b9c5dfae-cedd-444a-89f8-165ab3b16386] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc001616640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000159540 TLS:<nil>}
I1109 13:51:41.847920  561590 retry.go:31] will retry after 142.484211ms: Temporary Error: unexpected response code: 503
I1109 13:51:41.999247  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0d28e58e-407e-4844-8ea9-c2aadca8405b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:41 GMT]] Body:0xc001783000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206780 TLS:<nil>}
I1109 13:51:41.999334  561590 retry.go:31] will retry after 184.106807ms: Temporary Error: unexpected response code: 503
I1109 13:51:42.187896  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b18e064d-e374-48c2-8c31-ee2ef6b9bbb4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:42 GMT]] Body:0xc0017830c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000159680 TLS:<nil>}
I1109 13:51:42.187970  561590 retry.go:31] will retry after 204.901275ms: Temporary Error: unexpected response code: 503
I1109 13:51:42.397375  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c7dcb075-18e2-4b11-bfcf-d6a44cafb9be] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:42 GMT]] Body:0xc0016da5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000159900 TLS:<nil>}
I1109 13:51:42.397455  561590 retry.go:31] will retry after 537.957896ms: Temporary Error: unexpected response code: 503
I1109 13:51:42.940883  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[412cfada-1aa2-4f4f-b965-748355946fbd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:42 GMT]] Body:0xc0016167c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00047e640 TLS:<nil>}
I1109 13:51:42.941000  561590 retry.go:31] will retry after 488.834469ms: Temporary Error: unexpected response code: 503
I1109 13:51:43.434775  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2b4de988-a82c-41ca-a725-d8f4eb1e8509] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:43 GMT]] Body:0xc001783180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002068c0 TLS:<nil>}
I1109 13:51:43.434864  561590 retry.go:31] will retry after 754.050602ms: Temporary Error: unexpected response code: 503
I1109 13:51:44.193481  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e882427f-4dc4-4454-bb62-121b14f90249] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:44 GMT]] Body:0xc001616880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000159a40 TLS:<nil>}
I1109 13:51:44.193558  561590 retry.go:31] will retry after 1.674519063s: Temporary Error: unexpected response code: 503
I1109 13:51:45.872334  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a26b6450-74c7-487f-87f3-9f181705e4c8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:45 GMT]] Body:0xc001783280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206a00 TLS:<nil>}
I1109 13:51:45.872426  561590 retry.go:31] will retry after 1.551069179s: Temporary Error: unexpected response code: 503
I1109 13:51:47.428908  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0c858100-edab-4cfe-8535-31e198ab5aef] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:47 GMT]] Body:0xc001616940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000159b80 TLS:<nil>}
I1109 13:51:47.428997  561590 retry.go:31] will retry after 2.686357027s: Temporary Error: unexpected response code: 503
I1109 13:51:50.119517  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a8d3537e-9f1d-4fd3-b6e3-5e06300be7ef] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:50 GMT]] Body:0xc001783380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206b40 TLS:<nil>}
I1109 13:51:50.119606  561590 retry.go:31] will retry after 5.93935361s: Temporary Error: unexpected response code: 503
I1109 13:51:56.063787  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fa11447a-2ca6-4e5c-b8cb-ced59005df34] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:51:56 GMT]] Body:0xc001616a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206c80 TLS:<nil>}
I1109 13:51:56.063890  561590 retry.go:31] will retry after 8.505570332s: Temporary Error: unexpected response code: 503
I1109 13:52:04.577330  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8e3072ad-089f-429d-8021-08f1f3d4afee] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:52:04 GMT]] Body:0xc001783440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000159cc0 TLS:<nil>}
I1109 13:52:04.577411  561590 retry.go:31] will retry after 14.278874481s: Temporary Error: unexpected response code: 503
I1109 13:52:18.865231  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[181de382-06e4-496b-bc9f-23bfc61aff40] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:52:18 GMT]] Body:0xc0016da780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206dc0 TLS:<nil>}
I1109 13:52:18.865324  561590 retry.go:31] will retry after 17.151466904s: Temporary Error: unexpected response code: 503
I1109 13:52:36.022154  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8cb9a531-48ab-4b8c-af4f-6990b8fdf989] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:52:36 GMT]] Body:0xc001783500 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00047e780 TLS:<nil>}
I1109 13:52:36.022238  561590 retry.go:31] will retry after 34.959904381s: Temporary Error: unexpected response code: 503
I1109 13:53:10.987025  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5de34288-445a-4804-bf03-f735be41fa40] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:53:10 GMT]] Body:0xc0016da8c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00047e8c0 TLS:<nil>}
I1109 13:53:10.987125  561590 retry.go:31] will retry after 41.294397457s: Temporary Error: unexpected response code: 503
I1109 13:53:52.287342  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[77df4dc3-8611-4c64-9ca2-c70ddbd88309] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:53:52 GMT]] Body:0xc0014e4040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000158000 TLS:<nil>}
I1109 13:53:52.287428  561590 retry.go:31] will retry after 39.89703754s: Temporary Error: unexpected response code: 503
I1109 13:54:32.188494  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[89db0e55-dd1b-46d9-a643-83649e02aaa3] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:54:32 GMT]] Body:0xc001616080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206f00 TLS:<nil>}
I1109 13:54:32.188583  561590 retry.go:31] will retry after 36.573922469s: Temporary Error: unexpected response code: 503
I1109 13:55:08.769326  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ae398623-8bba-4d71-a7a9-d5a4268efc8a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:55:08 GMT]] Body:0xc001616100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000158140 TLS:<nil>}
I1109 13:55:08.769419  561590 retry.go:31] will retry after 1m26.036117408s: Temporary Error: unexpected response code: 503
I1109 13:56:34.812811  561590 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8e56b671-08ae-4b0d-a249-8cdbb99f2edc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 09 Nov 2025 13:56:34 GMT]] Body:0xc0016160c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207040 TLS:<nil>}
I1109 13:56:34.812919  561590 retry.go:31] will retry after 52.660065866s: Temporary Error: unexpected response code: 503
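Note: the long tail above is minikube's retry helper (retry.go:31) polling the dashboard service through the proxy; it treats 503 as a temporary error and backs off roughly exponentially, from microseconds up to about 1m26s, until the caller gives up (the test ran for about five minutes). A minimal sketch of polling with exponential backoff in the same spirit, with illustrative parameters (the initial delay, jitter, and cap are not minikube's exact values):

	package main

	import (
		"fmt"
		"math/rand"
		"net/http"
		"time"
	)

	// pollUntilOK GETs url until it returns 200 OK, sleeping with exponential
	// backoff plus jitter between attempts, up to an overall deadline.
	func pollUntilOK(url string, deadline time.Duration) error {
		backoff := 200 * time.Microsecond // illustrative initial delay
		start := time.Now()
		for {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("will retry after %v: unexpected response code: %d\n", backoff, resp.StatusCode)
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("gave up after %v", deadline)
			}
			time.Sleep(backoff)
			// Double the delay, add random jitter, and cap it so a single
			// sleep never exceeds 90 seconds.
			backoff = backoff*2 + time.Duration(rand.Int63n(int64(backoff)))
			if backoff > 90*time.Second {
				backoff = 90 * time.Second
			}
		}
	}

	func main() {
		url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
		if err := pollUntilOK(url, 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}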
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-419649 -n functional-419649
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-419649 logs -n 25: (1.874906035s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-419649 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ ssh       │ functional-419649 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh       │ functional-419649 ssh -- ls -la /mount-9p                                                                                         │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh       │ functional-419649 ssh cat /mount-9p/test-1762696275541277107                                                                      │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh       │ functional-419649 ssh sudo cat /etc/test/nested/copy/553473/hosts                                                                 │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ start     │ -p functional-419649 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio                           │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ start     │ -p functional-419649 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                     │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ start     │ -p functional-419649 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio                           │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-419649 --alsologtostderr -v=1                                                                    │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ ssh       │ functional-419649 ssh stat /mount-9p/created-by-test                                                                              │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh       │ functional-419649 ssh stat /mount-9p/created-by-pod                                                                               │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh       │ functional-419649 ssh sudo umount -f /mount-9p                                                                                    │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ mount     │ -p functional-419649 /tmp/TestFunctionalparallelMountCmdspecific-port2962691435/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ ssh       │ functional-419649 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ ssh       │ functional-419649 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh       │ functional-419649 ssh -- ls -la /mount-9p                                                                                         │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh       │ functional-419649 ssh sudo umount -f /mount-9p                                                                                    │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ ssh       │ functional-419649 ssh findmnt -T /mount1                                                                                          │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ mount     │ -p functional-419649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup522304696/001:/mount3 --alsologtostderr -v=1                 │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ mount     │ -p functional-419649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup522304696/001:/mount1 --alsologtostderr -v=1                 │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ mount     │ -p functional-419649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup522304696/001:/mount2 --alsologtostderr -v=1                 │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ ssh       │ functional-419649 ssh findmnt -T /mount1                                                                                          │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh       │ functional-419649 ssh findmnt -T /mount2                                                                                          │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh       │ functional-419649 ssh findmnt -T /mount3                                                                                          │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ mount     │ -p functional-419649 --kill=true                                                                                                  │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:51:39
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:51:39.860979  561575 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:51:39.861103  561575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:51:39.861110  561575 out.go:374] Setting ErrFile to fd 2...
	I1109 13:51:39.861115  561575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:51:39.861532  561575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 13:51:39.862180  561575 out.go:368] Setting JSON to false
	I1109 13:51:39.863220  561575 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":70449,"bootTime":1762625851,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 13:51:39.863355  561575 start.go:143] virtualization: kvm guest
	I1109 13:51:39.865116  561575 out.go:179] * [functional-419649] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 13:51:39.866506  561575 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:51:39.866542  561575 notify.go:221] Checking for updates...
	I1109 13:51:39.869030  561575 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:51:39.870218  561575 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 13:51:39.871342  561575 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 13:51:39.872675  561575 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 13:51:39.873970  561575 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:51:39.875604  561575 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:51:39.876177  561575 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:51:39.915932  561575 out.go:179] * Using the kvm2 driver based on existing profile
	I1109 13:51:39.917245  561575 start.go:309] selected driver: kvm2
	I1109 13:51:39.917274  561575 start.go:930] validating driver "kvm2" against &{Name:functional-419649 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-419649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:51:39.917426  561575 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:51:39.919670  561575 out.go:203] 
	W1109 13:51:39.920739  561575 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1109 13:51:39.921941  561575 out.go:203] 
	
	
	==> CRI-O <==
	Nov 09 13:56:40 functional-419649 crio[6056]: time="2025-11-09 13:56:40.929667461Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4bfeb1eaf4e15a5f6e37e7754b859374bfe1b44c20eebb4ebbf05b097e18c3c,PodSandboxId:a1a33c028bd43eb4e7eb2debd0b100c09d8c5ea5c4e6cb4a5f501c8e57bd2b9f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1762696309056036951,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a19c92c2-78f7-4060-ac8a-b2554d1b04cb,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e33bb0e4b8eb41966938874d3c927e9012ef3133494f78fc3a0915e652a8cc,PodSandboxId:b4f6bbf257db3b8fbac0994f6910a975771946cefa56c4fcedb8bc1329d92bd5,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1762696269236248644,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-7q2d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f5d2732-44a0-47cb-8c9d-f5e4b42d8bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2f786eb6996484feae9a6edf630d709e85075222b420eeacbd4f5bcdf42fe9,PodSandboxId:d343fe3deeb567979594b26e1177a2a5ab212e7e352b36c5d4d62140943587d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762696013461666904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7de985bd6dba90bd0013a202ba648a3374b5e13c9eac145c1ff3a556ad18b11,PodSandboxId:55a203c6316403aa49a2955a0853041d78c5074838c3bb8ebaa82188ae2d598d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:176269601345111480
5,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccfd82eb8e555e10a2eafc36a68aa554bade3da97f156fb8739c88f2a38cf0e,PodSandboxId:17039e2b70c72e705a132625c23567c3638c1b25f3af9b63ed1849d52da
a2b63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762696012872773465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7004426713a395e63c67bf1a805682492109c2bf495d3246fcb0f28c29099d7,PodSandboxId:9b614ffdc2151d1580f4fc9869a755d2e41aff7bd3c05802acf032de23b91e7c,Metada
ta:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762696008010989457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeac0983c075eb11541843961479d16f1ab06804bc32ece9e1a68fd86e0050c7,PodSan
dboxId:c63f4500183b22de7cb690d64315a984561f054ec009c59cdf26b6427d6060b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762696008044381598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f301eddabdc4748ec29a00b5d541088744c5ab8cc5777cea1ba92eb84008e179,PodSandboxId:b5a20367f799b5092d5b4a95d5392074f211426e75222c44241f261f7b05e763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762696007936737203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd12d128087ae793d7d792e812c6f6ccca052788ddf81a1e90ea30b64ab4849,PodSandboxId:6bda3fdf294e3217b2c066d240d843a37e0880dcedbf78cb455f103455016be0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762696007910619648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4c301aa576e79baa0710f6d51bb504,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostP
ort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c23c19796791474d9cbefd6426cedf9b06ba08c535f77b519664e5d5fe0e1da,PodSandboxId:cbb25c67e227ae6d14f5d31b745da29ae24e0b42896047793ff6795b34dbc3bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883942673163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.
container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dabded828a263500488aec4eed71cc058df72adef2bcdb48a96af101f8296650,PodSandboxId:8575db975cef01eb89fd28f03c6fa8fdca33842a2041979b632356575beb2d34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa
6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883798388910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ec1c8b227e612c946139a2114c20f15d630c80f99ada219f51c961
793c721b,PodSandboxId:209b0ec75c725d6d45ec9e42a7ba7a936de6699df431c66f39fcb3aec80a1ab1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1762695883066023200,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tw9jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d037d4-fe85-43d4-8322-67e3cf4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6678989530f542c497fe013e487fb90f58d077213ceab56369a11083ac93d8aa,PodSandboxId:8cbe5cd0f783
4ad7420d084807c5915acd62cf10d4ddf89d1d29edf6f0936966,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1762695883122410369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a05faf8a4dc9342caa1ccb9a3986f838c476148641b63fc61957a6a5f78d3,PodSandboxId:99b115d10596e4e2acea4b39e
1c2773d555e5bcd31c5829f19e770b5d584ef64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1762695878365041555,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:9251976d9d7b91c3dd81cc2547a4867d13c44581df4bf007267d504fee90a86e,PodSandboxId:7d83e0a07a96bc1167687657e35bdf244ca76065332caccf2cbd9d4fe64ff29c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1762695878239061096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5b8301c397c59a54b37b9851ca454244b8fbec40b66bf02f5df0283be65347,PodSandboxId:783c1d6ed75e1986bb80f86c0a3ef9c7b34fe784b7c558e9e71a8dae6e90a890,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1762695878231412184,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89b458f6-476d-49b9-be76-8ad03546dd3a name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:56:40 functional-419649 crio[6056]: time="2025-11-09 13:56:40.936900907Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ec37907-4522-4939-8a6b-d3bd198be6db name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 09 13:56:40 functional-419649 crio[6056]: time="2025-11-09 13:56:40.937254160Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7f4919b7937a2922f09f87ac5060cd46067c18623966619cce7115b4696dc954,Metadata:&PodSandboxMetadata{Name:mysql-5bb876957f-2vzsj,Uid:33dac2a4-0080-4332-bb7e-013b368be634,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762696314843857815,Labels:map[string]string{app: mysql,io.kubernetes.container.name: POD,io.kubernetes.pod.name: mysql-5bb876957f-2vzsj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33dac2a4-0080-4332-bb7e-013b368be634,pod-template-hash: 5bb876957f,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:51:54.518193957Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:70743887fa965292b06f699c5d9945d26f1dc2745675699fb2e7dad8d076c968,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-77bf4d6c4c-nhqbg,Uid:0d93ae6c-8c15-4992-8d45-0638b27bc438,Namespace:kuberne
tes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762696301607345523,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c4c-nhqbg,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 0d93ae6c-8c15-4992-8d45-0638b27bc438,k8s-app: dashboard-metrics-scraper,pod-template-hash: 77bf4d6c4c,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:51:41.261010416Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:6eb198c4cffbca66719f3a6f7930986376ecf49b7955ff5be8f35067230d6478,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-855c9754f9-9z7jc,Uid:f6a2902a-84a0-493d-9446-f6bb760e3b7c,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762696301565290317,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-9z7jc,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: f6a2902a-84a0-493d-9446-f6bb760e3b7c,k8s-app: kubernetes-dashboard,pod-template-hash: 855c9754f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:51:41.239901995Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:00b3497f0fdf19fa2a5f6c29dc26d27f09d0d8f7bb980e911c927bb3db8db1a1,Metadata:&PodSandboxMetadata{Name:sp-pod,Uid:050d522c-0b3b-45e6-bcc9-4a75faca154f,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762696276071038784,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 050d522c-0b3b-45e6-bcc9-4a75faca154f,test: storage-provisioner,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"test\":\"storage-provisioner\"},\"name\":\"sp-pod\",\"namespace\":\"default\"},\"spec\":{\"containe
rs\":[{\"image\":\"docker.io/nginx\",\"name\":\"myfrontend\",\"volumeMounts\":[{\"mountPath\":\"/tmp/mount\",\"name\":\"mypd\"}]}],\"volumes\":[{\"name\":\"mypd\",\"persistentVolumeClaim\":{\"claimName\":\"myclaim\"}}]}}\n,kubernetes.io/config.seen: 2025-11-09T13:51:15.749123420Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b4f6bbf257db3b8fbac0994f6910a975771946cefa56c4fcedb8bc1329d92bd5,Metadata:&PodSandboxMetadata{Name:hello-node-connect-7d85dfc575-7q2d9,Uid:2f5d2732-44a0-47cb-8c9d-f5e4b42d8bf1,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762696268667530522,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-7q2d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f5d2732-44a0-47cb-8c9d-f5e4b42d8bf1,pod-template-hash: 7d85dfc575,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:51:08.221032415Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSa
ndbox{Id:7f3a212c8880faf39cfef12ce56258bf1558e8dc8700ae8d7dc15b1a5bbc3e8d,Metadata:&PodSandboxMetadata{Name:hello-node-75c85bcc94-vgzbw,Uid:3b85306c-2aa1-4f2a-9f4f-b08d7fe54720,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762696267382882335,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-75c85bcc94-vgzbw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b85306c-2aa1-4f2a-9f4f-b08d7fe54720,pod-template-hash: 75c85bcc94,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:51:07.042635274Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:55a203c6316403aa49a2955a0853041d78c5074838c3bb8ebaa82188ae2d598d,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-zrw7g,Uid:9011e98a-2a19-48e0-8e28-8bddfcffc50c,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1762696012528236825,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7
g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:46:51.847651475Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d343fe3deeb567979594b26e1177a2a5ab212e7e352b36c5d4d62140943587d7,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-wkwss,Uid:cc2b7dd4-023d-4994-9237-fabeae6e63ce,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1762696012526305851,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:46:51.847649492Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:17039e2b70c72e705a132625c23567c3638c1b25f3af9b63ed1
849d52daa2b63,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ae48a075-3b00-486c-b8b2-6b2080262987,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1762696012213662677,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mount
Path\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-11-09T13:46:51.847642606Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b5a20367f799b5092d5b4a95d5392074f211426e75222c44241f261f7b05e763,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-419649,Uid:62b30b4238b2d99ce79cd53f17bb6da4,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1762696007640998184,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 62b30b4238b2d99ce79cd53f17bb6da4,kubernetes.io/config.seen: 2025-11-09T13:46:46.858356715Z,kubernetes.
io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c63f4500183b22de7cb690d64315a984561f054ec009c59cdf26b6427d6060b9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-419649,Uid:274cf2193394c035a9ce4fd611eef33b,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1762696007611556988,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 274cf2193394c035a9ce4fd611eef33b,kubernetes.io/config.seen: 2025-11-09T13:46:46.858359081Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9b614ffdc2151d1580f4fc9869a755d2e41aff7bd3c05802acf032de23b91e7c,Metadata:&PodSandboxMetadata{Name:etcd-functional-419649,Uid:c02798d3a566bdf1b79c9a1609aa8851,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:176269600761
0266457,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.90:2379,kubernetes.io/config.hash: c02798d3a566bdf1b79c9a1609aa8851,kubernetes.io/config.seen: 2025-11-09T13:46:46.858349640Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6bda3fdf294e3217b2c066d240d843a37e0880dcedbf78cb455f103455016be0,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-419649,Uid:8a4c301aa576e79baa0710f6d51bb504,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762696007588180325,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4c301aa576e79baa0710
f6d51bb504,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.90:8441,kubernetes.io/config.hash: 8a4c301aa576e79baa0710f6d51bb504,kubernetes.io/config.seen: 2025-11-09T13:46:46.858355297Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4ec37907-4522-4939-8a6b-d3bd198be6db name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 09 13:56:40 functional-419649 crio[6056]: time="2025-11-09 13:56:40.938725639Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed46d493-dc61-4b38-ae4c-087ca1518ef7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:56:40 functional-419649 crio[6056]: time="2025-11-09 13:56:40.939544981Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed46d493-dc61-4b38-ae4c-087ca1518ef7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:56:40 functional-419649 crio[6056]: time="2025-11-09 13:56:40.939738242Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92e33bb0e4b8eb41966938874d3c927e9012ef3133494f78fc3a0915e652a8cc,PodSandboxId:b4f6bbf257db3b8fbac0994f6910a975771946cefa56c4fcedb8bc1329d92bd5,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1762696269236248644,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-7q2d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f5d2732-44a0-47cb-8c9d-f5e4b42d8bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2f786eb6996484feae9a6edf630d709e85075222b420eeacbd4f5bcdf42fe9,PodSandboxId:d343fe3deeb567979594b26e1177a2a5ab212e7e352b36c5d4d62140943587d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762696013461666904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"cont
ainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7de985bd6dba90bd0013a202ba648a3374b5e13c9eac145c1ff3a556ad18b11,PodSandboxId:55a203c6316403aa49a2955a0853041d78c5074838c3bb8ebaa82188ae2d598d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762696013451114
805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccfd82eb8e555e10a2eafc36a68aa554bade3da97f156fb8739c88f2a38cf0e,PodSandboxId:17039e2b70c72e705a132625c23567c3638c1b25f3af9b63ed1849d52
daa2b63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762696012872773465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7004426713a395e63c67bf1a805682492109c2bf495d3246fcb0f28c29099d7,PodSandboxId:9b614ffdc2151d1580f4fc9869a755d2e41aff7bd3c05802acf032de23b91e7c,Meta
data:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762696008010989457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeac0983c075eb11541843961479d16f1ab06804bc32ece9e1a68fd86e0050c7,PodS
andboxId:c63f4500183b22de7cb690d64315a984561f054ec009c59cdf26b6427d6060b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762696008044381598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:f301eddabdc4748ec29a00b5d541088744c5ab8cc5777cea1ba92eb84008e179,PodSandboxId:b5a20367f799b5092d5b4a95d5392074f211426e75222c44241f261f7b05e763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762696007936737203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd12d128087ae793d7d792e812c6f6ccca052788ddf81a1e90ea30b64ab4849,PodSandboxId:6bda3fdf294e3217b2c066d240d843a37e0880dcedbf78cb455f103455016be0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762696007910619648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4c301aa576e79baa0710f6d51bb504,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed46d493-dc61-4b38-ae4c-087ca1518ef7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:56:40 functional-419649 crio[6056]: time="2025-11-09 13:56:40.978655249Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4eb48924-0f5a-4752-8330-ec9a3204cd42 name=/runtime.v1.RuntimeService/Version
	Nov 09 13:56:40 functional-419649 crio[6056]: time="2025-11-09 13:56:40.978758422Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4eb48924-0f5a-4752-8330-ec9a3204cd42 name=/runtime.v1.RuntimeService/Version
	Nov 09 13:56:40 functional-419649 crio[6056]: time="2025-11-09 13:56:40.980529130Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d4e2bd8e-f122-49e2-bea4-ee854f56e07d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:56:40 functional-419649 crio[6056]: time="2025-11-09 13:56:40.981246562Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762696600981219935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177580,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4e2bd8e-f122-49e2-bea4-ee854f56e07d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:56:40 functional-419649 crio[6056]: time="2025-11-09 13:56:40.982366451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7432b6bd-4f33-4fd4-99fe-b82657a5bb43 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:56:40 functional-419649 crio[6056]: time="2025-11-09 13:56:40.982496034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7432b6bd-4f33-4fd4-99fe-b82657a5bb43 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:56:40 functional-419649 crio[6056]: time="2025-11-09 13:56:40.982914147Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4bfeb1eaf4e15a5f6e37e7754b859374bfe1b44c20eebb4ebbf05b097e18c3c,PodSandboxId:a1a33c028bd43eb4e7eb2debd0b100c09d8c5ea5c4e6cb4a5f501c8e57bd2b9f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1762696309056036951,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a19c92c2-78f7-4060-ac8a-b2554d1b04cb,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e33bb0e4b8eb41966938874d3c927e9012ef3133494f78fc3a0915e652a8cc,PodSandboxId:b4f6bbf257db3b8fbac0994f6910a975771946cefa56c4fcedb8bc1329d92bd5,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1762696269236248644,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-7q2d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f5d2732-44a0-47cb-8c9d-f5e4b42d8bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2f786eb6996484feae9a6edf630d709e85075222b420eeacbd4f5bcdf42fe9,PodSandboxId:d343fe3deeb567979594b26e1177a2a5ab212e7e352b36c5d4d62140943587d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762696013461666904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7de985bd6dba90bd0013a202ba648a3374b5e13c9eac145c1ff3a556ad18b11,PodSandboxId:55a203c6316403aa49a2955a0853041d78c5074838c3bb8ebaa82188ae2d598d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:176269601345111480
5,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccfd82eb8e555e10a2eafc36a68aa554bade3da97f156fb8739c88f2a38cf0e,PodSandboxId:17039e2b70c72e705a132625c23567c3638c1b25f3af9b63ed1849d52da
a2b63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762696012872773465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7004426713a395e63c67bf1a805682492109c2bf495d3246fcb0f28c29099d7,PodSandboxId:9b614ffdc2151d1580f4fc9869a755d2e41aff7bd3c05802acf032de23b91e7c,Metada
ta:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762696008010989457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeac0983c075eb11541843961479d16f1ab06804bc32ece9e1a68fd86e0050c7,PodSan
dboxId:c63f4500183b22de7cb690d64315a984561f054ec009c59cdf26b6427d6060b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762696008044381598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f301eddabdc4748ec29a00b5d541088744c5ab8cc5777cea1ba92eb84008e179,PodSandboxId:b5a20367f799b5092d5b4a95d5392074f211426e75222c44241f261f7b05e763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762696007936737203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd12d128087ae793d7d792e812c6f6ccca052788ddf81a1e90ea30b64ab4849,PodSandboxId:6bda3fdf294e3217b2c066d240d843a37e0880dcedbf78cb455f103455016be0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762696007910619648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4c301aa576e79baa0710f6d51bb504,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostP
ort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c23c19796791474d9cbefd6426cedf9b06ba08c535f77b519664e5d5fe0e1da,PodSandboxId:cbb25c67e227ae6d14f5d31b745da29ae24e0b42896047793ff6795b34dbc3bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883942673163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.
container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dabded828a263500488aec4eed71cc058df72adef2bcdb48a96af101f8296650,PodSandboxId:8575db975cef01eb89fd28f03c6fa8fdca33842a2041979b632356575beb2d34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa
6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883798388910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ec1c8b227e612c946139a2114c20f15d630c80f99ada219f51c961
793c721b,PodSandboxId:209b0ec75c725d6d45ec9e42a7ba7a936de6699df431c66f39fcb3aec80a1ab1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1762695883066023200,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tw9jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d037d4-fe85-43d4-8322-67e3cf4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6678989530f542c497fe013e487fb90f58d077213ceab56369a11083ac93d8aa,PodSandboxId:8cbe5cd0f783
4ad7420d084807c5915acd62cf10d4ddf89d1d29edf6f0936966,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1762695883122410369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a05faf8a4dc9342caa1ccb9a3986f838c476148641b63fc61957a6a5f78d3,PodSandboxId:99b115d10596e4e2acea4b39e
1c2773d555e5bcd31c5829f19e770b5d584ef64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1762695878365041555,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:9251976d9d7b91c3dd81cc2547a4867d13c44581df4bf007267d504fee90a86e,PodSandboxId:7d83e0a07a96bc1167687657e35bdf244ca76065332caccf2cbd9d4fe64ff29c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1762695878239061096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5b8301c397c59a54b37b9851ca454244b8fbec40b66bf02f5df0283be65347,PodSandboxId:783c1d6ed75e1986bb80f86c0a3ef9c7b34fe784b7c558e9e71a8dae6e90a890,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1762695878231412184,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7432b6bd-4f33-4fd4-99fe-b82657a5bb43 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:56:41 functional-419649 crio[6056]: time="2025-11-09 13:56:41.000053524Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=2429e503-c128-4ccb-8bd2-79ef43159306 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 09 13:56:41 functional-419649 crio[6056]: time="2025-11-09 13:56:41.000497280Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7f4919b7937a2922f09f87ac5060cd46067c18623966619cce7115b4696dc954,Metadata:&PodSandboxMetadata{Name:mysql-5bb876957f-2vzsj,Uid:33dac2a4-0080-4332-bb7e-013b368be634,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762696314843857815,Labels:map[string]string{app: mysql,io.kubernetes.container.name: POD,io.kubernetes.pod.name: mysql-5bb876957f-2vzsj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33dac2a4-0080-4332-bb7e-013b368be634,pod-template-hash: 5bb876957f,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:51:54.518193957Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:70743887fa965292b06f699c5d9945d26f1dc2745675699fb2e7dad8d076c968,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-77bf4d6c4c-nhqbg,Uid:0d93ae6c-8c15-4992-8d45-0638b27bc438,Namespace:kuberne
tes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762696301607345523,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c4c-nhqbg,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 0d93ae6c-8c15-4992-8d45-0638b27bc438,k8s-app: dashboard-metrics-scraper,pod-template-hash: 77bf4d6c4c,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:51:41.261010416Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:6eb198c4cffbca66719f3a6f7930986376ecf49b7955ff5be8f35067230d6478,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-855c9754f9-9z7jc,Uid:f6a2902a-84a0-493d-9446-f6bb760e3b7c,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762696301565290317,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-9z7jc,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: f6a2902a-84a0-493d-9446-f6bb760e3b7c,k8s-app: kubernetes-dashboard,pod-template-hash: 855c9754f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:51:41.239901995Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a1a33c028bd43eb4e7eb2debd0b100c09d8c5ea5c4e6cb4a5f501c8e57bd2b9f,Metadata:&PodSandboxMetadata{Name:busybox-mount,Uid:a19c92c2-78f7-4060-ac8a-b2554d1b04cb,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1762696277162739772,Labels:map[string]string{integration-test: busybox-mount,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a19c92c2-78f7-4060-ac8a-b2554d1b04cb,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:51:16.840249814Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:00b3497f0fdf19fa2a5f6c29dc26d27f09d0d8f7bb980e911c927bb3db8db1a1,Metadata:&PodSandboxMe
tadata{Name:sp-pod,Uid:050d522c-0b3b-45e6-bcc9-4a75faca154f,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762696276071038784,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 050d522c-0b3b-45e6-bcc9-4a75faca154f,test: storage-provisioner,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"test\":\"storage-provisioner\"},\"name\":\"sp-pod\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"docker.io/nginx\",\"name\":\"myfrontend\",\"volumeMounts\":[{\"mountPath\":\"/tmp/mount\",\"name\":\"mypd\"}]}],\"volumes\":[{\"name\":\"mypd\",\"persistentVolumeClaim\":{\"claimName\":\"myclaim\"}}]}}\n,kubernetes.io/config.seen: 2025-11-09T13:51:15.749123420Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b4f6bbf257db3b8fbac0994f6910a975771946cefa56c4fcedb8bc
1329d92bd5,Metadata:&PodSandboxMetadata{Name:hello-node-connect-7d85dfc575-7q2d9,Uid:2f5d2732-44a0-47cb-8c9d-f5e4b42d8bf1,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762696268667530522,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-7q2d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f5d2732-44a0-47cb-8c9d-f5e4b42d8bf1,pod-template-hash: 7d85dfc575,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:51:08.221032415Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7f3a212c8880faf39cfef12ce56258bf1558e8dc8700ae8d7dc15b1a5bbc3e8d,Metadata:&PodSandboxMetadata{Name:hello-node-75c85bcc94-vgzbw,Uid:3b85306c-2aa1-4f2a-9f4f-b08d7fe54720,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762696267382882335,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-75c85bcc94-vgzbw,io.kubernetes.pod.n
amespace: default,io.kubernetes.pod.uid: 3b85306c-2aa1-4f2a-9f4f-b08d7fe54720,pod-template-hash: 75c85bcc94,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:51:07.042635274Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:55a203c6316403aa49a2955a0853041d78c5074838c3bb8ebaa82188ae2d598d,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-zrw7g,Uid:9011e98a-2a19-48e0-8e28-8bddfcffc50c,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1762696012528236825,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:46:51.847651475Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d343fe3deeb567979594b26e1177a2a5ab212e7e352b36c5d4d62140943587d7,Metadata:&PodSandboxMetadata{
Name:coredns-66bc5c9577-wkwss,Uid:cc2b7dd4-023d-4994-9237-fabeae6e63ce,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1762696012526305851,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:46:51.847649492Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:17039e2b70c72e705a132625c23567c3638c1b25f3af9b63ed1849d52daa2b63,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ae48a075-3b00-486c-b8b2-6b2080262987,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1762696012213662677,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-11-09T13:46:51.847642606Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b5a20367f799b5092d5b4a95d5392074f211426e75222c44241f261f7b05e763,Metadata:&PodS
andboxMetadata{Name:kube-controller-manager-functional-419649,Uid:62b30b4238b2d99ce79cd53f17bb6da4,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1762696007640998184,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 62b30b4238b2d99ce79cd53f17bb6da4,kubernetes.io/config.seen: 2025-11-09T13:46:46.858356715Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c63f4500183b22de7cb690d64315a984561f054ec009c59cdf26b6427d6060b9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-419649,Uid:274cf2193394c035a9ce4fd611eef33b,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1762696007611556988,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kube
rnetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 274cf2193394c035a9ce4fd611eef33b,kubernetes.io/config.seen: 2025-11-09T13:46:46.858359081Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9b614ffdc2151d1580f4fc9869a755d2e41aff7bd3c05802acf032de23b91e7c,Metadata:&PodSandboxMetadata{Name:etcd-functional-419649,Uid:c02798d3a566bdf1b79c9a1609aa8851,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1762696007610266457,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.90:2379,kubernetes.io/config.hash: c02798d3a566bdf1b79c
9a1609aa8851,kubernetes.io/config.seen: 2025-11-09T13:46:46.858349640Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6bda3fdf294e3217b2c066d240d843a37e0880dcedbf78cb455f103455016be0,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-419649,Uid:8a4c301aa576e79baa0710f6d51bb504,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1762696007588180325,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4c301aa576e79baa0710f6d51bb504,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.90:8441,kubernetes.io/config.hash: 8a4c301aa576e79baa0710f6d51bb504,kubernetes.io/config.seen: 2025-11-09T13:46:46.858355297Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8575db975cef01eb89fd28f03c6fa8fdca33842a2041979b632356575beb2d34,Meta
data:&PodSandboxMetadata{Name:coredns-66bc5c9577-zrw7g,Uid:9011e98a-2a19-48e0-8e28-8bddfcffc50c,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1762695882893588827,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:44:42.163724010Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cbb25c67e227ae6d14f5d31b745da29ae24e0b42896047793ff6795b34dbc3bd,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-wkwss,Uid:cc2b7dd4-023d-4994-9237-fabeae6e63ce,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1762695882829572657,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc
2b7dd4-023d-4994-9237-fabeae6e63ce,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:44:42.163722845Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:209b0ec75c725d6d45ec9e42a7ba7a936de6699df431c66f39fcb3aec80a1ab1,Metadata:&PodSandboxMetadata{Name:kube-proxy-tw9jj,Uid:64d037d4-fe85-43d4-8322-67e3cf4a7b89,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1762695882577432291,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tw9jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d037d4-fe85-43d4-8322-67e3cf4a7b89,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:44:42.163713894Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8cbe5cd0f7834ad7420d084807c5915acd62cf10d4ddf89d1d29edf6f0936966,Metadata:&PodSandboxMetadata{Na
me:storage-provisioner,Uid:ae48a075-3b00-486c-b8b2-6b2080262987,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1762695882558758850,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hos
tNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-11-09T13:44:42.163721297Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:99b115d10596e4e2acea4b39e1c2773d555e5bcd31c5829f19e770b5d584ef64,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-419649,Uid:274cf2193394c035a9ce4fd611eef33b,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1762695877941635855,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 274cf2193394c035a9ce4fd611eef33b,kubernetes.io/config.seen: 2025-11-09T13:44:37.166314381Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:783c1d6e
d75e1986bb80f86c0a3ef9c7b34fe784b7c558e9e71a8dae6e90a890,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-419649,Uid:62b30b4238b2d99ce79cd53f17bb6da4,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1762695877925089753,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 62b30b4238b2d99ce79cd53f17bb6da4,kubernetes.io/config.seen: 2025-11-09T13:44:37.166313646Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7d83e0a07a96bc1167687657e35bdf244ca76065332caccf2cbd9d4fe64ff29c,Metadata:&PodSandboxMetadata{Name:etcd-functional-419649,Uid:c02798d3a566bdf1b79c9a1609aa8851,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1762695877903744813,Labels:map[string]string{
component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.90:2379,kubernetes.io/config.hash: c02798d3a566bdf1b79c9a1609aa8851,kubernetes.io/config.seen: 2025-11-09T13:44:37.166307571Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2429e503-c128-4ccb-8bd2-79ef43159306 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 09 13:56:41 functional-419649 crio[6056]: time="2025-11-09 13:56:41.002420846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c14454c5-ba28-4339-b584-5a0462c693a8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:56:41 functional-419649 crio[6056]: time="2025-11-09 13:56:41.002487150Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c14454c5-ba28-4339-b584-5a0462c693a8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:56:41 functional-419649 crio[6056]: time="2025-11-09 13:56:41.003554109Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4bfeb1eaf4e15a5f6e37e7754b859374bfe1b44c20eebb4ebbf05b097e18c3c,PodSandboxId:a1a33c028bd43eb4e7eb2debd0b100c09d8c5ea5c4e6cb4a5f501c8e57bd2b9f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1762696309056036951,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a19c92c2-78f7-4060-ac8a-b2554d1b04cb,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e33bb0e4b8eb41966938874d3c927e9012ef3133494f78fc3a0915e652a8cc,PodSandboxId:b4f6bbf257db3b8fbac0994f6910a975771946cefa56c4fcedb8bc1329d92bd5,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1762696269236248644,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-7q2d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f5d2732-44a0-47cb-8c9d-f5e4b42d8bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2f786eb6996484feae9a6edf630d709e85075222b420eeacbd4f5bcdf42fe9,PodSandboxId:d343fe3deeb567979594b26e1177a2a5ab212e7e352b36c5d4d62140943587d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762696013461666904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7de985bd6dba90bd0013a202ba648a3374b5e13c9eac145c1ff3a556ad18b11,PodSandboxId:55a203c6316403aa49a2955a0853041d78c5074838c3bb8ebaa82188ae2d598d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:176269601345111480
5,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccfd82eb8e555e10a2eafc36a68aa554bade3da97f156fb8739c88f2a38cf0e,PodSandboxId:17039e2b70c72e705a132625c23567c3638c1b25f3af9b63ed1849d52da
a2b63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762696012872773465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7004426713a395e63c67bf1a805682492109c2bf495d3246fcb0f28c29099d7,PodSandboxId:9b614ffdc2151d1580f4fc9869a755d2e41aff7bd3c05802acf032de23b91e7c,Metada
ta:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762696008010989457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeac0983c075eb11541843961479d16f1ab06804bc32ece9e1a68fd86e0050c7,PodSan
dboxId:c63f4500183b22de7cb690d64315a984561f054ec009c59cdf26b6427d6060b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762696008044381598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f301eddabdc4748ec29a00b5d541088744c5ab8cc5777cea1ba92eb84008e179,PodSandboxId:b5a20367f799b5092d5b4a95d5392074f211426e75222c44241f261f7b05e763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762696007936737203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd12d128087ae793d7d792e812c6f6ccca052788ddf81a1e90ea30b64ab4849,PodSandboxId:6bda3fdf294e3217b2c066d240d843a37e0880dcedbf78cb455f103455016be0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762696007910619648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4c301aa576e79baa0710f6d51bb504,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostP
ort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c23c19796791474d9cbefd6426cedf9b06ba08c535f77b519664e5d5fe0e1da,PodSandboxId:cbb25c67e227ae6d14f5d31b745da29ae24e0b42896047793ff6795b34dbc3bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883942673163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.
container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dabded828a263500488aec4eed71cc058df72adef2bcdb48a96af101f8296650,PodSandboxId:8575db975cef01eb89fd28f03c6fa8fdca33842a2041979b632356575beb2d34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa
6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883798388910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ec1c8b227e612c946139a2114c20f15d630c80f99ada219f51c961
793c721b,PodSandboxId:209b0ec75c725d6d45ec9e42a7ba7a936de6699df431c66f39fcb3aec80a1ab1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1762695883066023200,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tw9jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d037d4-fe85-43d4-8322-67e3cf4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6678989530f542c497fe013e487fb90f58d077213ceab56369a11083ac93d8aa,PodSandboxId:8cbe5cd0f783
4ad7420d084807c5915acd62cf10d4ddf89d1d29edf6f0936966,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1762695883122410369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a05faf8a4dc9342caa1ccb9a3986f838c476148641b63fc61957a6a5f78d3,PodSandboxId:99b115d10596e4e2acea4b39e
1c2773d555e5bcd31c5829f19e770b5d584ef64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1762695878365041555,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:9251976d9d7b91c3dd81cc2547a4867d13c44581df4bf007267d504fee90a86e,PodSandboxId:7d83e0a07a96bc1167687657e35bdf244ca76065332caccf2cbd9d4fe64ff29c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1762695878239061096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5b8301c397c59a54b37b9851ca454244b8fbec40b66bf02f5df0283be65347,PodSandboxId:783c1d6ed75e1986bb80f86c0a3ef9c7b34fe784b7c558e9e71a8dae6e90a890,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1762695878231412184,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c14454c5-ba28-4339-b584-5a0462c693a8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:56:41 functional-419649 crio[6056]: time="2025-11-09 13:56:41.034115336Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4fd761c0-3132-4ac4-b9cf-d0bde14a9b3f name=/runtime.v1.RuntimeService/Version
	Nov 09 13:56:41 functional-419649 crio[6056]: time="2025-11-09 13:56:41.034257091Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4fd761c0-3132-4ac4-b9cf-d0bde14a9b3f name=/runtime.v1.RuntimeService/Version
	Nov 09 13:56:41 functional-419649 crio[6056]: time="2025-11-09 13:56:41.036750962Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24745cb4-347b-43d4-a0dc-d3319dd7308b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:56:41 functional-419649 crio[6056]: time="2025-11-09 13:56:41.038547398Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762696601038514896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177580,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24745cb4-347b-43d4-a0dc-d3319dd7308b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:56:41 functional-419649 crio[6056]: time="2025-11-09 13:56:41.039497597Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c0e78c0-8e74-41ea-bf08-e262447a2c0e name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:56:41 functional-419649 crio[6056]: time="2025-11-09 13:56:41.039588431Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c0e78c0-8e74-41ea-bf08-e262447a2c0e name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:56:41 functional-419649 crio[6056]: time="2025-11-09 13:56:41.040186508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4bfeb1eaf4e15a5f6e37e7754b859374bfe1b44c20eebb4ebbf05b097e18c3c,PodSandboxId:a1a33c028bd43eb4e7eb2debd0b100c09d8c5ea5c4e6cb4a5f501c8e57bd2b9f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1762696309056036951,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a19c92c2-78f7-4060-ac8a-b2554d1b04cb,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e33bb0e4b8eb41966938874d3c927e9012ef3133494f78fc3a0915e652a8cc,PodSandboxId:b4f6bbf257db3b8fbac0994f6910a975771946cefa56c4fcedb8bc1329d92bd5,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1762696269236248644,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-7q2d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f5d2732-44a0-47cb-8c9d-f5e4b42d8bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2f786eb6996484feae9a6edf630d709e85075222b420eeacbd4f5bcdf42fe9,PodSandboxId:d343fe3deeb567979594b26e1177a2a5ab212e7e352b36c5d4d62140943587d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762696013461666904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7de985bd6dba90bd0013a202ba648a3374b5e13c9eac145c1ff3a556ad18b11,PodSandboxId:55a203c6316403aa49a2955a0853041d78c5074838c3bb8ebaa82188ae2d598d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:176269601345111480
5,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccfd82eb8e555e10a2eafc36a68aa554bade3da97f156fb8739c88f2a38cf0e,PodSandboxId:17039e2b70c72e705a132625c23567c3638c1b25f3af9b63ed1849d52da
a2b63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762696012872773465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7004426713a395e63c67bf1a805682492109c2bf495d3246fcb0f28c29099d7,PodSandboxId:9b614ffdc2151d1580f4fc9869a755d2e41aff7bd3c05802acf032de23b91e7c,Metada
ta:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762696008010989457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeac0983c075eb11541843961479d16f1ab06804bc32ece9e1a68fd86e0050c7,PodSan
dboxId:c63f4500183b22de7cb690d64315a984561f054ec009c59cdf26b6427d6060b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762696008044381598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f301eddabdc4748ec29a00b5d541088744c5ab8cc5777cea1ba92eb84008e179,PodSandboxId:b5a20367f799b5092d5b4a95d5392074f211426e75222c44241f261f7b05e763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762696007936737203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd12d128087ae793d7d792e812c6f6ccca052788ddf81a1e90ea30b64ab4849,PodSandboxId:6bda3fdf294e3217b2c066d240d843a37e0880dcedbf78cb455f103455016be0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762696007910619648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4c301aa576e79baa0710f6d51bb504,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostP
ort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c23c19796791474d9cbefd6426cedf9b06ba08c535f77b519664e5d5fe0e1da,PodSandboxId:cbb25c67e227ae6d14f5d31b745da29ae24e0b42896047793ff6795b34dbc3bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883942673163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.
container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dabded828a263500488aec4eed71cc058df72adef2bcdb48a96af101f8296650,PodSandboxId:8575db975cef01eb89fd28f03c6fa8fdca33842a2041979b632356575beb2d34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa
6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883798388910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ec1c8b227e612c946139a2114c20f15d630c80f99ada219f51c961
793c721b,PodSandboxId:209b0ec75c725d6d45ec9e42a7ba7a936de6699df431c66f39fcb3aec80a1ab1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1762695883066023200,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tw9jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d037d4-fe85-43d4-8322-67e3cf4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6678989530f542c497fe013e487fb90f58d077213ceab56369a11083ac93d8aa,PodSandboxId:8cbe5cd0f783
4ad7420d084807c5915acd62cf10d4ddf89d1d29edf6f0936966,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1762695883122410369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a05faf8a4dc9342caa1ccb9a3986f838c476148641b63fc61957a6a5f78d3,PodSandboxId:99b115d10596e4e2acea4b39e
1c2773d555e5bcd31c5829f19e770b5d584ef64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1762695878365041555,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:9251976d9d7b91c3dd81cc2547a4867d13c44581df4bf007267d504fee90a86e,PodSandboxId:7d83e0a07a96bc1167687657e35bdf244ca76065332caccf2cbd9d4fe64ff29c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1762695878239061096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5b8301c397c59a54b37b9851ca454244b8fbec40b66bf02f5df0283be65347,PodSandboxId:783c1d6ed75e1986bb80f86c0a3ef9c7b34fe784b7c558e9e71a8dae6e90a890,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1762695878231412184,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c0e78c0-8e74-41ea-bf08-e262447a2c0e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a4bfeb1eaf4e1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e     4 minutes ago       Exited              mount-munger              0                   a1a33c028bd43       busybox-mount
	92e33bb0e4b8e       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6   5 minutes ago       Running             echo-server               0                   b4f6bbf257db3       hello-node-connect-7d85dfc575-7q2d9
	df2f786eb6996       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        9 minutes ago       Running             coredns                   2                   d343fe3deeb56       coredns-66bc5c9577-wkwss
	b7de985bd6dba       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        9 minutes ago       Running             coredns                   2                   55a203c631640       coredns-66bc5c9577-zrw7g
	4ccfd82eb8e55       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        9 minutes ago       Running             storage-provisioner       3                   17039e2b70c72       storage-provisioner
	eeac0983c075e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                        9 minutes ago       Running             kube-scheduler            2                   c63f4500183b2       kube-scheduler-functional-419649
	a7004426713a3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        9 minutes ago       Running             etcd                      2                   9b614ffdc2151       etcd-functional-419649
	f301eddabdc47       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                        9 minutes ago       Running             kube-controller-manager   2                   b5a20367f799b       kube-controller-manager-functional-419649
	efd12d128087a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                        9 minutes ago       Running             kube-apiserver            0                   6bda3fdf294e3       kube-apiserver-functional-419649
	5c23c19796791       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        11 minutes ago      Exited              coredns                   1                   cbb25c67e227a       coredns-66bc5c9577-wkwss
	dabded828a263       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        11 minutes ago      Exited              coredns                   1                   8575db975cef0       coredns-66bc5c9577-zrw7g
	6678989530f54       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        11 minutes ago      Exited              storage-provisioner       2                   8cbe5cd0f7834       storage-provisioner
	a7ec1c8b227e6       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                        11 minutes ago      Exited              kube-proxy                1                   209b0ec75c725       kube-proxy-tw9jj
	455a05faf8a4d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                        12 minutes ago      Exited              kube-scheduler            1                   99b115d10596e       kube-scheduler-functional-419649
	9251976d9d7b9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        12 minutes ago      Exited              etcd                      1                   7d83e0a07a96b       etcd-functional-419649
	3c5b8301c397c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                        12 minutes ago      Exited              kube-controller-manager   1                   783c1d6ed75e1       kube-controller-manager-functional-419649
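
A listing like the one above can be regenerated at the CRI level from inside the minikube VM; a minimal sketch, reusing the functional-419649 profile name that appears throughout these logs (not part of the test run):

  # list all containers (running and exited) known to CRI-O on the node
  minikube ssh -p functional-419649 -- sudo crictl ps -a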
	
	
	==> coredns [5c23c19796791474d9cbefd6426cedf9b06ba08c535f77b519664e5d5fe0e1da] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44800 - 53651 "HINFO IN 1400593764380402781.8492229270877848689. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025331213s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b7de985bd6dba90bd0013a202ba648a3374b5e13c9eac145c1ff3a556ad18b11] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57144 - 62700 "HINFO IN 7058232511171921965.4871291774030092335. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.426433128s
	
	
	==> coredns [dabded828a263500488aec4eed71cc058df72adef2bcdb48a96af101f8296650] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49606 - 37184 "HINFO IN 9083893957874782947.4427150369312914047. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.433296592s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [df2f786eb6996484feae9a6edf630d709e85075222b420eeacbd4f5bcdf42fe9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45303 - 52022 "HINFO IN 6137322651211618462.5033134956876486643. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.038624673s
	
	
	==> describe nodes <==
	Name:               functional-419649
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-419649
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=functional-419649
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T13_43_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:43:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-419649
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 13:56:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 13:55:10 +0000   Sun, 09 Nov 2025 13:43:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 13:55:10 +0000   Sun, 09 Nov 2025 13:43:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 13:55:10 +0000   Sun, 09 Nov 2025 13:43:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 13:55:10 +0000   Sun, 09 Nov 2025 13:43:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.90
	  Hostname:    functional-419649
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 f58fffb24b824db893edaea13eb8cd34
	  System UUID:                f58fffb2-4b82-4db8-93ed-aea13eb8cd34
	  Boot ID:                    10be153f-12a9-4056-a1dd-41beb5dacdf5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-vgzbw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  default                     hello-node-connect-7d85dfc575-7q2d9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  default                     mysql-5bb876957f-2vzsj                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    4m47s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 coredns-66bc5c9577-wkwss                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 coredns-66bc5c9577-zrw7g                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-functional-419649                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-functional-419649              250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m49s
	  kube-system                 kube-controller-manager-functional-419649     200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-tw9jj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-functional-419649              100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-nhqbg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9z7jc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (72%)  700m (35%)
	  memory             752Mi (19%)  1040Mi (26%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x3 over 13m)      kubelet          Node functional-419649 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x3 over 13m)      kubelet          Node functional-419649 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x3 over 13m)      kubelet          Node functional-419649 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     13m                    kubelet          Node functional-419649 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                    kubelet          Node functional-419649 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                    kubelet          Node functional-419649 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                13m                    kubelet          Node functional-419649 status is now: NodeReady
	  Normal  RegisteredNode           13m                    node-controller  Node functional-419649 event: Registered Node functional-419649 in Controller
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node functional-419649 status is now: NodeHasSufficientMemory
	  Normal  Starting                 12m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node functional-419649 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node functional-419649 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                    node-controller  Node functional-419649 event: Registered Node functional-419649 in Controller
	  Normal  Starting                 9m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m54s (x8 over 9m54s)  kubelet          Node functional-419649 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m54s (x8 over 9m54s)  kubelet          Node functional-419649 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m54s (x7 over 9m54s)  kubelet          Node functional-419649 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m46s                  node-controller  Node functional-419649 event: Registered Node functional-419649 in Controller
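
The node summary above (the "describe nodes" block) is plain kubectl output; a sketch to regenerate it, assuming the kubeconfig context is named after the profile as in the other kubectl commands in this report:

  kubectl --context functional-419649 describe node functional-419649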
	
	
	==> dmesg <==
	[  +1.227326] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000026] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.100058] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.121158] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.109438] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.192670] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.028077] kauditd_printk_skb: 18 callbacks suppressed
	[Nov 9 13:44] kauditd_printk_skb: 249 callbacks suppressed
	[  +5.534068] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.229697] kauditd_printk_skb: 78 callbacks suppressed
	[  +1.140310] kauditd_printk_skb: 57 callbacks suppressed
	[  +0.132391] kauditd_printk_skb: 194 callbacks suppressed
	[ +11.536468] kauditd_printk_skb: 116 callbacks suppressed
	[Nov 9 13:45] kauditd_printk_skb: 12 callbacks suppressed
	[Nov 9 13:46] kauditd_printk_skb: 263 callbacks suppressed
	[  +0.942624] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.198959] kauditd_printk_skb: 150 callbacks suppressed
	[Nov 9 13:47] kauditd_printk_skb: 125 callbacks suppressed
	[Nov 9 13:51] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.083221] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.000364] kauditd_printk_skb: 78 callbacks suppressed
	[  +0.000133] kauditd_printk_skb: 62 callbacks suppressed
	[  +7.412474] kauditd_printk_skb: 62 callbacks suppressed
	[  +0.000033] kauditd_printk_skb: 25 callbacks suppressed
	
	
	==> etcd [9251976d9d7b91c3dd81cc2547a4867d13c44581df4bf007267d504fee90a86e] <==
	{"level":"warn","ts":"2025-11-09T13:44:40.717700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:44:40.747628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:44:40.762504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:44:40.787093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:44:40.809876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:44:40.833047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:44:40.947438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45540","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T13:45:03.891974Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-09T13:45:03.892178Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-419649","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.90:2380"],"advertise-client-urls":["https://192.168.39.90:2379"]}
	{"level":"error","ts":"2025-11-09T13:45:03.892264Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-09T13:45:03.892316Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-09T13:45:03.971568Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T13:45:03.971624Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8d381aaacda0b9bd","current-leader-member-id":"8d381aaacda0b9bd"}
	{"level":"warn","ts":"2025-11-09T13:45:03.971649Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T13:45:03.971856Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T13:45:03.971872Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T13:45:03.971893Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-09T13:45:03.971779Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-09T13:45:03.971967Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.90:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T13:45:03.971979Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.90:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T13:45:03.971986Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.90:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T13:45:03.975952Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.90:2380"}
	{"level":"error","ts":"2025-11-09T13:45:03.976056Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.90:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T13:45:03.976084Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.90:2380"}
	{"level":"info","ts":"2025-11-09T13:45:03.976113Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-419649","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.90:2380"],"advertise-client-urls":["https://192.168.39.90:2379"]}
	
	
	==> etcd [a7004426713a395e63c67bf1a805682492109c2bf495d3246fcb0f28c29099d7] <==
	{"level":"warn","ts":"2025-11-09T13:46:50.199777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.222667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.239318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.251723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.267293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.284551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.296069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.315968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.326763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.340581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.359060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.373234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.389155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.403233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.419191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.445351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.477707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.487681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.498150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.512979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.520588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.532487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.546948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.557302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.616155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54108","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:56:41 up 13 min,  0 users,  load average: 0.14, 0.21, 0.18
	Linux functional-419649 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Nov  1 20:49:51 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [efd12d128087ae793d7d792e812c6f6ccca052788ddf81a1e90ea30b64ab4849] <==
	I1109 13:46:51.565556       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1109 13:46:51.566773       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 13:46:51.569360       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 13:46:51.569444       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 13:46:51.569452       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 13:46:51.571432       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 13:46:51.571516       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 13:46:51.583391       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1109 13:46:51.608585       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1109 13:46:51.898682       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 13:46:52.389190       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 13:46:54.236687       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 13:46:54.293952       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 13:46:54.332997       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 13:46:54.343624       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 13:46:56.099629       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 13:46:56.298692       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 13:46:56.399527       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 13:51:02.151207       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.49.22"}
	I1109 13:51:07.123619       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.49.89"}
	I1109 13:51:08.337891       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.221.72"}
	I1109 13:51:40.973107       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 13:51:41.359605       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.76.215"}
	I1109 13:51:41.382444       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.72.212"}
	I1109 13:51:54.435031       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.201.164"}
	
	
	==> kube-controller-manager [3c5b8301c397c59a54b37b9851ca454244b8fbec40b66bf02f5df0283be65347] <==
	I1109 13:44:46.211484       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 13:44:46.211674       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 13:44:46.211775       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-419649"
	I1109 13:44:46.211938       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1109 13:44:46.220947       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1109 13:44:46.224727       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 13:44:46.224832       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1109 13:44:46.226268       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1109 13:44:46.226303       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1109 13:44:46.226310       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1109 13:44:46.226317       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1109 13:44:46.229235       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1109 13:44:46.231468       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 13:44:46.233055       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 13:44:46.236148       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1109 13:44:46.242644       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1109 13:44:46.244477       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1109 13:44:46.244739       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1109 13:44:46.245344       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 13:44:46.246729       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1109 13:44:46.246887       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 13:44:46.248505       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 13:44:46.252228       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1109 13:44:46.255601       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 13:44:46.258981       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-controller-manager [f301eddabdc4748ec29a00b5d541088744c5ab8cc5777cea1ba92eb84008e179] <==
	I1109 13:46:55.929401       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 13:46:55.935001       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 13:46:55.939490       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1109 13:46:55.943866       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1109 13:46:55.944041       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1109 13:46:55.944130       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 13:46:55.944198       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 13:46:55.944698       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 13:46:55.944849       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 13:46:55.944937       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 13:46:55.945195       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-419649"
	I1109 13:46:55.945257       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1109 13:46:55.945718       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1109 13:46:55.948124       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 13:46:55.949990       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 13:46:55.956456       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1109 13:46:55.965862       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1109 13:51:41.099168       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.133658       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.142440       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.145364       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.165332       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.174762       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.175487       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.182280       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [a7ec1c8b227e612c946139a2114c20f15d630c80f99ada219f51c961793c721b] <==
	I1109 13:44:43.756193       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 13:44:43.857352       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 13:44:43.857411       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.90"]
	E1109 13:44:43.857528       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 13:44:44.033990       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1109 13:44:44.034236       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1109 13:44:44.034359       1 server_linux.go:132] "Using iptables Proxier"
	I1109 13:44:44.063937       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 13:44:44.064227       1 server.go:527] "Version info" version="v1.34.1"
	I1109 13:44:44.064263       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:44:44.066418       1 config.go:106] "Starting endpoint slice config controller"
	I1109 13:44:44.066493       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 13:44:44.069250       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 13:44:44.069283       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 13:44:44.075985       1 config.go:200] "Starting service config controller"
	I1109 13:44:44.076022       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 13:44:44.076752       1 config.go:309] "Starting node config controller"
	I1109 13:44:44.076862       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 13:44:44.076872       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 13:44:44.166673       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1109 13:44:44.170507       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 13:44:44.177107       1 shared_informer.go:356] "Caches are synced" controller="service config"
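
The ip6tables failure above is why kube-proxy reports single-stack IPv4 with the iptables proxier. One way to confirm the IPv4 rules were actually programmed (a sketch, not part of the test run):

  # the KUBE-SERVICES chain in the nat table is created by kube-proxy's iptables mode
  minikube ssh -p functional-419649 -- sudo iptables -t nat -L KUBE-SERVICES -n | head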
	
	
	==> kube-scheduler [455a05faf8a4dc9342caa1ccb9a3986f838c476148641b63fc61957a6a5f78d3] <==
	I1109 13:44:39.692053       1 serving.go:386] Generated self-signed cert in-memory
	W1109 13:44:41.712695       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 13:44:41.712723       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 13:44:41.712732       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 13:44:41.712738       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 13:44:41.813904       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 13:44:41.813983       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:44:41.819415       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:44:41.819568       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:44:41.820003       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 13:44:41.820117       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 13:44:41.921485       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:45:03.884393       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1109 13:45:03.884459       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1109 13:45:03.884484       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1109 13:45:03.884518       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:45:03.884632       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1109 13:45:03.884711       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [eeac0983c075eb11541843961479d16f1ab06804bc32ece9e1a68fd86e0050c7] <==
	I1109 13:46:50.361953       1 serving.go:386] Generated self-signed cert in-memory
	W1109 13:46:51.486251       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 13:46:51.486300       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 13:46:51.486312       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 13:46:51.486319       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 13:46:51.563170       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 13:46:51.565842       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:46:51.570632       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 13:46:51.573338       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:46:51.573391       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:46:51.573409       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 13:46:51.674422       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 13:56:06 functional-419649 kubelet[6431]: E1109 13:56:06.946976    6431 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists"
	Nov 09 13:56:06 functional-419649 kubelet[6431]: E1109 13:56:06.947050    6431 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists" pod="kube-system/kube-proxy-tw9jj"
	Nov 09 13:56:06 functional-419649 kubelet[6431]: E1109 13:56:06.947067    6431 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists" pod="kube-system/kube-proxy-tw9jj"
	Nov 09 13:56:06 functional-419649 kubelet[6431]: E1109 13:56:06.947119    6431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-tw9jj_kube-system(64d037d4-fe85-43d4-8322-67e3cf4a7b89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-tw9jj_kube-system(64d037d4-fe85-43d4-8322-67e3cf4a7b89)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\\\" already exists\"" pod="kube-system/kube-proxy-tw9jj" podUID="64d037d4-fe85-43d4-8322-67e3cf4a7b89"
	Nov 09 13:56:07 functional-419649 kubelet[6431]: E1109 13:56:07.286424    6431 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762696567285113119  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 09 13:56:07 functional-419649 kubelet[6431]: E1109 13:56:07.286554    6431 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762696567285113119  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 09 13:56:17 functional-419649 kubelet[6431]: E1109 13:56:17.289395    6431 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762696577288448391  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 09 13:56:17 functional-419649 kubelet[6431]: E1109 13:56:17.289442    6431 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762696577288448391  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 09 13:56:19 functional-419649 kubelet[6431]: E1109 13:56:19.029664    6431 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Nov 09 13:56:19 functional-419649 kubelet[6431]: E1109 13:56:19.029737    6431 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Nov 09 13:56:19 functional-419649 kubelet[6431]: E1109 13:56:19.030061    6431 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-2vzsj_default(33dac2a4-0080-4332-bb7e-013b368be634): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 09 13:56:19 functional-419649 kubelet[6431]: E1109 13:56:19.030093    6431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2vzsj" podUID="33dac2a4-0080-4332-bb7e-013b368be634"
	Nov 09 13:56:19 functional-419649 kubelet[6431]: E1109 13:56:19.946484    6431 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists"
	Nov 09 13:56:19 functional-419649 kubelet[6431]: E1109 13:56:19.946537    6431 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists" pod="kube-system/kube-proxy-tw9jj"
	Nov 09 13:56:19 functional-419649 kubelet[6431]: E1109 13:56:19.946556    6431 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists" pod="kube-system/kube-proxy-tw9jj"
	Nov 09 13:56:19 functional-419649 kubelet[6431]: E1109 13:56:19.946639    6431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-tw9jj_kube-system(64d037d4-fe85-43d4-8322-67e3cf4a7b89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-tw9jj_kube-system(64d037d4-fe85-43d4-8322-67e3cf4a7b89)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\\\" already exists\"" pod="kube-system/kube-proxy-tw9jj" podUID="64d037d4-fe85-43d4-8322-67e3cf4a7b89"
	Nov 09 13:56:27 functional-419649 kubelet[6431]: E1109 13:56:27.291770    6431 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762696587291338486  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 09 13:56:27 functional-419649 kubelet[6431]: E1109 13:56:27.291877    6431 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762696587291338486  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 09 13:56:33 functional-419649 kubelet[6431]: E1109 13:56:33.937614    6431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2vzsj" podUID="33dac2a4-0080-4332-bb7e-013b368be634"
	Nov 09 13:56:33 functional-419649 kubelet[6431]: E1109 13:56:33.954489    6431 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists"
	Nov 09 13:56:33 functional-419649 kubelet[6431]: E1109 13:56:33.954552    6431 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists" pod="kube-system/kube-proxy-tw9jj"
	Nov 09 13:56:33 functional-419649 kubelet[6431]: E1109 13:56:33.954571    6431 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists" pod="kube-system/kube-proxy-tw9jj"
	Nov 09 13:56:33 functional-419649 kubelet[6431]: E1109 13:56:33.954624    6431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-tw9jj_kube-system(64d037d4-fe85-43d4-8322-67e3cf4a7b89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-tw9jj_kube-system(64d037d4-fe85-43d4-8322-67e3cf4a7b89)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\\\" already exists\"" pod="kube-system/kube-proxy-tw9jj" podUID="64d037d4-fe85-43d4-8322-67e3cf4a7b89"
	Nov 09 13:56:37 functional-419649 kubelet[6431]: E1109 13:56:37.295200    6431 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762696597294501786  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 09 13:56:37 functional-419649 kubelet[6431]: E1109 13:56:37.295264    6431 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762696597294501786  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	
	
	==> storage-provisioner [4ccfd82eb8e555e10a2eafc36a68aa554bade3da97f156fb8739c88f2a38cf0e] <==
	W1109 13:56:16.619543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:18.624456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:18.630282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:20.635137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:20.644015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:22.648329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:22.655363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:24.659418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:24.667554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:26.671669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:26.683317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:28.688596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:28.694257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:30.698190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:30.704296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:32.708098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:32.718051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:34.724642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:34.732300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:36.736882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:36.744123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:38.750354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:38.761441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:40.770436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:40.779990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6678989530f542c497fe013e487fb90f58d077213ceab56369a11083ac93d8aa] <==
	I1109 13:44:43.443213       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 13:44:43.494214       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 13:44:43.509099       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 13:44:43.550008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:44:47.009647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:44:51.271035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:44:54.874686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:44:57.929867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:45:00.953468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:45:00.962351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 13:45:00.962538       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 13:45:00.962713       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"edabc5ba-9ba5-4f59-828d-21dd30bf1c29", APIVersion:"v1", ResourceVersion:"533", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-419649_f071ba0b-f946-440f-ad3d-fe78e643a2d5 became leader
	I1109 13:45:00.962870       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-419649_f071ba0b-f946-440f-ad3d-fe78e643a2d5!
	W1109 13:45:00.967758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:45:00.981678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 13:45:01.063910       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-419649_f071ba0b-f946-440f-ad3d-fe78e643a2d5!
	W1109 13:45:02.986290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:45:02.993756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
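
The kubelet section of the dump above loops on "pod sandbox with name \"k8s_kube-proxy-tw9jj_...\" already exists", which typically means CRI-O is holding a stale sandbox that the kubelet keeps trying to recreate. A minimal diagnostic sketch, not part of this test run (the pod-name filter is taken from the log; <POD_ID> is a placeholder; assumes SSH access to the node via minikube):

	# list sandboxes for the stuck pod to find the stale one
	minikube -p functional-419649 ssh -- sudo crictl pods --name kube-proxy-tw9jj
	# remove the stale sandbox by ID so the kubelet can recreate it
	minikube -p functional-419649 ssh -- sudo crictl rmp <POD_ID>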
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-419649 -n functional-419649
helpers_test.go:269: (dbg) Run:  kubectl --context functional-419649 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-vgzbw mysql-5bb876957f-2vzsj sp-pod dashboard-metrics-scraper-77bf4d6c4c-nhqbg kubernetes-dashboard-855c9754f9-9z7jc
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-419649 describe pod busybox-mount hello-node-75c85bcc94-vgzbw mysql-5bb876957f-2vzsj sp-pod dashboard-metrics-scraper-77bf4d6c4c-nhqbg kubernetes-dashboard-855c9754f9-9z7jc
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-419649 describe pod busybox-mount hello-node-75c85bcc94-vgzbw mysql-5bb876957f-2vzsj sp-pod dashboard-metrics-scraper-77bf4d6c4c-nhqbg kubernetes-dashboard-855c9754f9-9z7jc: exit status 1 (119.144128ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-419649/192.168.39.90
	Start Time:       Sun, 09 Nov 2025 13:51:16 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.14
	IPs:
	  IP:  10.244.0.14
	Containers:
	  mount-munger:
	    Container ID:  cri-o://a4bfeb1eaf4e15a5f6e37e7754b859374bfe1b44c20eebb4ebbf05b097e18c3c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 09 Nov 2025 13:51:49 +0000
	      Finished:     Sun, 09 Nov 2025 13:51:49 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6p5g6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-6p5g6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m25s  default-scheduler  Successfully assigned default/busybox-mount to functional-419649
	  Normal  Pulling    5m25s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m53s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.565s (31.594s including waiting). Image size: 4631262 bytes.
	  Normal  Created    4m53s  kubelet            Created container: mount-munger
	  Normal  Started    4m53s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-vgzbw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-419649/192.168.39.90
	Start Time:       Sun, 09 Nov 2025 13:51:07 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4fzhz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4fzhz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m35s  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-vgzbw to functional-419649
	  Normal  Pulling    5m35s  kubelet            Pulling image "kicbase/echo-server"
	  Normal  Pulled     5m34s  kubelet            Successfully pulled image "kicbase/echo-server" in 919ms (919ms including waiting). Image size: 4945246 bytes.
	
	
	Name:             mysql-5bb876957f-2vzsj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-419649/192.168.39.90
	Start Time:       Sun, 09 Nov 2025 13:51:54 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.17
	IPs:
	  IP:           10.244.0.17
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hfrjp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hfrjp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  4m48s                  default-scheduler  Successfully assigned default/mysql-5bb876957f-2vzsj to functional-419649
	  Warning  Failed     3m9s                   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m58s (x2 over 4m47s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     23s (x2 over 3m9s)     kubelet            Error: ErrImagePull
	  Warning  Failed     23s                    kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    9s (x2 over 3m9s)      kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     9s (x2 over 3m9s)      kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-419649/192.168.39.90
	Start Time:       Sun, 09 Nov 2025 13:51:15 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:  10.244.0.13
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9sr2c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-9sr2c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m26s                 default-scheduler  Successfully assigned default/sp-pod to functional-419649
	  Warning  Failed     4m56s                 kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m9s (x2 over 4m56s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m9s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    119s (x2 over 4m55s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     119s (x2 over 4m55s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    104s (x3 over 5m26s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-nhqbg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-9z7jc" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-419649 describe pod busybox-mount hello-node-75c85bcc94-vgzbw mysql-5bb876957f-2vzsj sp-pod dashboard-metrics-scraper-77bf4d6c4c-nhqbg kubernetes-dashboard-855c9754f9-9z7jc: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.72s)
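
Several of the failures above (mysql-5bb876957f-2vzsj, sp-pod) reduce to Docker Hub's unauthenticated pull rate limit ("toomanyrequests") rather than anything cluster-side. One common mitigation, sketched here as an assumption rather than part of this run, is to pull with credentials via an imagePullSecret attached to the default service account (regcred is a placeholder name; <user> and <token> are Docker Hub credentials):

	# create a registry credential secret in the default namespace
	kubectl --context functional-419649 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>
	# have pods using the default service account pull with it
	kubectl --context functional-419649 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'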

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (31.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-419649 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-419649 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-7q2d9" [2f5d2732-44a0-47cb-8c9d-f5e4b42d8bf1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-7q2d9" [2f5d2732-44a0-47cb-8c9d-f5e4b42d8bf1] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.0051973s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.90:30440
functional_test.go:1666: error fetching http://192.168.39.90:30440: Get "http://192.168.39.90:30440": dial tcp 192.168.39.90:30440: connect: connection refused
I1109 13:51:15.765704  553473 retry.go:31] will retry after 1.138117575s: Get "http://192.168.39.90:30440": dial tcp 192.168.39.90:30440: connect: connection refused
functional_test.go:1666: error fetching http://192.168.39.90:30440: Get "http://192.168.39.90:30440": dial tcp 192.168.39.90:30440: connect: connection refused
I1109 13:51:16.905141  553473 retry.go:31] will retry after 1.638496624s: Get "http://192.168.39.90:30440": dial tcp 192.168.39.90:30440: connect: connection refused
functional_test.go:1666: error fetching http://192.168.39.90:30440: Get "http://192.168.39.90:30440": dial tcp 192.168.39.90:30440: connect: connection refused
I1109 13:51:18.544993  553473 retry.go:31] will retry after 1.981679298s: Get "http://192.168.39.90:30440": dial tcp 192.168.39.90:30440: connect: connection refused
functional_test.go:1666: error fetching http://192.168.39.90:30440: Get "http://192.168.39.90:30440": dial tcp 192.168.39.90:30440: connect: connection refused
I1109 13:51:20.528520  553473 retry.go:31] will retry after 4.633218974s: Get "http://192.168.39.90:30440": dial tcp 192.168.39.90:30440: connect: connection refused
functional_test.go:1666: error fetching http://192.168.39.90:30440: Get "http://192.168.39.90:30440": dial tcp 192.168.39.90:30440: connect: connection refused
I1109 13:51:25.162863  553473 retry.go:31] will retry after 3.582460423s: Get "http://192.168.39.90:30440": dial tcp 192.168.39.90:30440: connect: connection refused
E1109 13:51:27.378760  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1666: error fetching http://192.168.39.90:30440: Get "http://192.168.39.90:30440": dial tcp 192.168.39.90:30440: connect: connection refused
I1109 13:51:28.746530  553473 retry.go:31] will retry after 7.621296246s: Get "http://192.168.39.90:30440": dial tcp 192.168.39.90:30440: connect: connection refused
functional_test.go:1666: error fetching http://192.168.39.90:30440: Get "http://192.168.39.90:30440": dial tcp 192.168.39.90:30440: connect: connection refused
functional_test.go:1686: failed to fetch http://192.168.39.90:30440: Get "http://192.168.39.90:30440": dial tcp 192.168.39.90:30440: connect: connection refused
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-419649 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-7q2d9
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-419649/192.168.39.90
Start Time:       Sun, 09 Nov 2025 13:51:08 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Running
IP:               10.244.0.12
IPs:
IP:           10.244.0.12
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   cri-o://92e33bb0e4b8eb41966938874d3c927e9012ef3133494f78fc3a0915e652a8cc
Image:          kicbase/echo-server
Image ID:       9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
Port:           <none>
Host Port:      <none>
State:          Running
Started:      Sun, 09 Nov 2025 13:51:09 +0000
Ready:          True
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2lx7k (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       True 
ContainersReady             True 
PodScheduled                True 
Volumes:
kube-api-access-2lx7k:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type    Reason     Age   From               Message
----    ------     ----  ----               -------
Normal  Scheduled  28s   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-7q2d9 to functional-419649
Normal  Pulling    27s   kubelet            Pulling image "kicbase/echo-server"
Normal  Pulled     27s   kubelet            Successfully pulled image "kicbase/echo-server" in 141ms (141ms including waiting). Image size: 4945246 bytes.
Normal  Created    27s   kubelet            Created container: echo-server
Normal  Started    27s   kubelet            Started container echo-server

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-419649 logs -l app=hello-node-connect
functional_test.go:1622: hello-node logs:
Echo server listening on port 8080.
functional_test.go:1624: (dbg) Run:  kubectl --context functional-419649 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.104.221.72
IPs:                      10.104.221.72
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30440/TCP
Endpoints:                10.244.0.12:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
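
The service has a live endpoint (10.244.0.12:8080) and the pod is Running, yet the NodePort refuses connections, which points at the node's proxy layer rather than the workload; the kubelet log elsewhere in this report shows kube-proxy-tw9jj unable to get a sandbox on the same node, which may be related. A plausible check, assuming SSH access to the node (30440 is the NodePort reported above):

	# is kube-proxy actually running on the node?
	kubectl --context functional-419649 -n kube-system get pods -l k8s-app=kube-proxy
	# did kube-proxy program an iptables rule for the NodePort?
	minikube -p functional-419649 ssh -- sudo iptables-save | grep 30440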
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-419649 -n functional-419649
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-419649 logs -n 25: (1.986709253s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ functional-419649 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ image   │ functional-419649 image load --daemon kicbase/echo-server:functional-419649 --alsologtostderr                                                                │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh     │ functional-419649 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                     │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh     │ functional-419649 ssh -n functional-419649 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh     │ functional-419649 ssh echo hello                                                                                                                             │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh     │ functional-419649 ssh cat /etc/hostname                                                                                                                      │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ addons  │ functional-419649 addons list                                                                                                                                │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ addons  │ functional-419649 addons list -o json                                                                                                                        │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ image   │ functional-419649 image ls                                                                                                                                   │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ image   │ functional-419649 image load --daemon kicbase/echo-server:functional-419649 --alsologtostderr                                                                │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ image   │ functional-419649 image ls                                                                                                                                   │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ image   │ functional-419649 image load --daemon kicbase/echo-server:functional-419649 --alsologtostderr                                                                │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ image   │ functional-419649 image ls                                                                                                                                   │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ image   │ functional-419649 image save kicbase/echo-server:functional-419649 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ image   │ functional-419649 image rm kicbase/echo-server:functional-419649 --alsologtostderr                                                                           │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ image   │ functional-419649 image ls                                                                                                                                   │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ image   │ functional-419649 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ image   │ functional-419649 image ls                                                                                                                                   │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ image   │ functional-419649 image save --daemon kicbase/echo-server:functional-419649 --alsologtostderr                                                                │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ service │ functional-419649 service hello-node-connect --url                                                                                                           │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ mount   │ -p functional-419649 /tmp/TestFunctionalparallelMountCmdany-port910271744/001:/mount-9p --alsologtostderr -v=1                                               │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ ssh     │ functional-419649 ssh findmnt -T /mount-9p | grep 9p                                                                                                         │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ ssh     │ functional-419649 ssh findmnt -T /mount-9p | grep 9p                                                                                                         │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh     │ functional-419649 ssh -- ls -la /mount-9p                                                                                                                    │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh     │ functional-419649 ssh cat /mount-9p/test-1762696275541277107                                                                                                 │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:45:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:45:02.847094  559203 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:45:02.847360  559203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:45:02.847364  559203 out.go:374] Setting ErrFile to fd 2...
	I1109 13:45:02.847367  559203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:45:02.847592  559203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 13:45:02.848126  559203 out.go:368] Setting JSON to false
	I1109 13:45:02.849095  559203 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":70052,"bootTime":1762625851,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 13:45:02.849202  559203 start.go:143] virtualization: kvm guest
	I1109 13:45:02.851348  559203 out.go:179] * [functional-419649] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 13:45:02.852776  559203 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:45:02.852780  559203 notify.go:221] Checking for updates...
	I1109 13:45:02.855401  559203 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:45:02.856817  559203 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 13:45:02.858238  559203 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 13:45:02.859590  559203 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 13:45:02.861028  559203 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:45:02.862871  559203 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:45:02.862972  559203 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:45:02.900990  559203 out.go:179] * Using the kvm2 driver based on existing profile
	I1109 13:45:02.902194  559203 start.go:309] selected driver: kvm2
	I1109 13:45:02.902203  559203 start.go:930] validating driver "kvm2" against &{Name:functional-419649 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-419649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:45:02.902347  559203 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:45:02.903921  559203 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:45:02.903979  559203 cni.go:84] Creating CNI manager for ""
	I1109 13:45:02.904062  559203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 13:45:02.904148  559203 start.go:353] cluster config:
	{Name:functional-419649 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-419649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:45:02.904275  559203 iso.go:125] acquiring lock: {Name:mk8f0547b4600c0b2d1e831f024251df19b55199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:45:02.906150  559203 out.go:179] * Starting "functional-419649" primary control-plane node in "functional-419649" cluster
	I1109 13:45:02.907462  559203 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:45:02.907493  559203 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 13:45:02.907509  559203 cache.go:65] Caching tarball of preloaded images
	I1109 13:45:02.907635  559203 preload.go:238] Found /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 13:45:02.907643  559203 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 13:45:02.907737  559203 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/config.json ...
	I1109 13:45:02.907975  559203 start.go:360] acquireMachinesLock for functional-419649: {Name:mkb2a0b4add9a99b18ce9ab72b74eb5b4fda0e0a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1109 13:45:02.908018  559203 start.go:364] duration metric: took 28.011µs to acquireMachinesLock for "functional-419649"
	I1109 13:45:02.908028  559203 start.go:96] Skipping create...Using existing machine configuration
	I1109 13:45:02.908032  559203 fix.go:54] fixHost starting: 
	I1109 13:45:02.910029  559203 fix.go:112] recreateIfNeeded on functional-419649: state=Running err=<nil>
	W1109 13:45:02.910053  559203 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 13:45:02.911640  559203 out.go:252] * Updating the running kvm2 "functional-419649" VM ...
	I1109 13:45:02.911674  559203 machine.go:94] provisionDockerMachine start ...
	I1109 13:45:02.914748  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:02.915319  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:02.915340  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:02.915529  559203 main.go:143] libmachine: Using SSH client type: native
	I1109 13:45:02.915758  559203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1109 13:45:02.915763  559203 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 13:45:03.027083  559203 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-419649
	
	I1109 13:45:03.027113  559203 buildroot.go:166] provisioning hostname "functional-419649"
	I1109 13:45:03.030945  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.031447  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:03.031465  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.031666  559203 main.go:143] libmachine: Using SSH client type: native
	I1109 13:45:03.031990  559203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1109 13:45:03.032002  559203 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-419649 && echo "functional-419649" | sudo tee /etc/hostname
	I1109 13:45:03.164922  559203 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-419649
	
	I1109 13:45:03.168188  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.168636  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:03.168669  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.168894  559203 main.go:143] libmachine: Using SSH client type: native
	I1109 13:45:03.169112  559203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1109 13:45:03.169123  559203 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-419649' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-419649/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-419649' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 13:45:03.289760  559203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 13:45:03.289832  559203 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21139-549598/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-549598/.minikube}
	I1109 13:45:03.289856  559203 buildroot.go:174] setting up certificates
	I1109 13:45:03.289869  559203 provision.go:84] configureAuth start
	I1109 13:45:03.295306  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.295926  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:03.295953  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.299991  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.300685  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:03.300716  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.301020  559203 provision.go:143] copyHostCerts
	I1109 13:45:03.301095  559203 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-549598/.minikube/cert.pem, removing ...
	I1109 13:45:03.301124  559203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-549598/.minikube/cert.pem
	I1109 13:45:03.301244  559203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/cert.pem (1123 bytes)
	I1109 13:45:03.301400  559203 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-549598/.minikube/key.pem, removing ...
	I1109 13:45:03.301405  559203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-549598/.minikube/key.pem
	I1109 13:45:03.301437  559203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/key.pem (1679 bytes)
	I1109 13:45:03.301489  559203 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-549598/.minikube/ca.pem, removing ...
	I1109 13:45:03.301492  559203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-549598/.minikube/ca.pem
	I1109 13:45:03.301515  559203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/ca.pem (1082 bytes)
	I1109 13:45:03.301590  559203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem org=jenkins.functional-419649 san=[127.0.0.1 192.168.39.90 functional-419649 localhost minikube]
	I1109 13:45:03.484770  559203 provision.go:177] copyRemoteCerts
	I1109 13:45:03.484840  559203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 13:45:03.488430  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.489084  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:03.489108  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.489346  559203 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/functional-419649/id_rsa Username:docker}
	I1109 13:45:03.577560  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 13:45:03.625368  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1109 13:45:03.666066  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 13:45:03.705166  559203 provision.go:87] duration metric: took 415.28178ms to configureAuth
	I1109 13:45:03.705190  559203 buildroot.go:189] setting minikube options for container-runtime
	I1109 13:45:03.705397  559203 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:45:03.709087  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.709610  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:03.709639  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:03.709881  559203 main.go:143] libmachine: Using SSH client type: native
	I1109 13:45:03.710159  559203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1109 13:45:03.710168  559203 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 13:45:09.565045  559203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 13:45:09.565064  559203 machine.go:97] duration metric: took 6.65338378s to provisionDockerMachine
	I1109 13:45:09.565075  559203 start.go:293] postStartSetup for "functional-419649" (driver="kvm2")
	I1109 13:45:09.565084  559203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 13:45:09.565159  559203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 13:45:09.568571  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.569078  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:09.569096  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.569287  559203 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/functional-419649/id_rsa Username:docker}
	I1109 13:45:09.657920  559203 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 13:45:09.664193  559203 info.go:137] Remote host: Buildroot 2025.02
	I1109 13:45:09.664221  559203 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-549598/.minikube/addons for local assets ...
	I1109 13:45:09.664303  559203 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-549598/.minikube/files for local assets ...
	I1109 13:45:09.664376  559203 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem -> 5534732.pem in /etc/ssl/certs
	I1109 13:45:09.664442  559203 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/test/nested/copy/553473/hosts -> hosts in /etc/test/nested/copy/553473
	I1109 13:45:09.664479  559203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/553473
	I1109 13:45:09.679696  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem --> /etc/ssl/certs/5534732.pem (1708 bytes)
	I1109 13:45:09.716342  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/test/nested/copy/553473/hosts --> /etc/test/nested/copy/553473/hosts (40 bytes)
	I1109 13:45:09.752619  559203 start.go:296] duration metric: took 187.524618ms for postStartSetup
	I1109 13:45:09.752668  559203 fix.go:56] duration metric: took 6.844630089s for fixHost
	I1109 13:45:09.756013  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.756422  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:09.756436  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.756616  559203 main.go:143] libmachine: Using SSH client type: native
	I1109 13:45:09.756838  559203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I1109 13:45:09.756844  559203 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1109 13:45:09.930120  559203 main.go:143] libmachine: SSH cmd err, output: <nil>: 1762695909.924449478
	
	I1109 13:45:09.930138  559203 fix.go:216] guest clock: 1762695909.924449478
	I1109 13:45:09.930149  559203 fix.go:229] Guest: 2025-11-09 13:45:09.924449478 +0000 UTC Remote: 2025-11-09 13:45:09.752671487 +0000 UTC m=+6.963111750 (delta=171.777991ms)
	I1109 13:45:09.930174  559203 fix.go:200] guest clock delta is within tolerance: 171.777991ms
	I1109 13:45:09.930181  559203 start.go:83] releasing machines lock for "functional-419649", held for 7.02215711s
	I1109 13:45:09.934331  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.934820  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:09.934841  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.935708  559203 ssh_runner.go:195] Run: cat /version.json
	I1109 13:45:09.935808  559203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 13:45:09.939805  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.940152  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.940368  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:09.940387  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.940593  559203 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/functional-419649/id_rsa Username:docker}
	I1109 13:45:09.940627  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:45:09.940648  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:45:09.940989  559203 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/functional-419649/id_rsa Username:docker}
	I1109 13:45:10.137772  559203 ssh_runner.go:195] Run: systemctl --version
	I1109 13:45:10.160746  559203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 13:45:10.404731  559203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 13:45:10.418925  559203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 13:45:10.418988  559203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 13:45:10.443481  559203 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 13:45:10.443505  559203 start.go:496] detecting cgroup driver to use...
	I1109 13:45:10.443575  559203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 13:45:10.489331  559203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 13:45:10.536067  559203 docker.go:218] disabling cri-docker service (if available) ...
	I1109 13:45:10.536142  559203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 13:45:10.588486  559203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 13:45:10.639617  559203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 13:45:10.987138  559203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 13:45:11.366704  559203 docker.go:234] disabling docker service ...
	I1109 13:45:11.366776  559203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 13:45:11.424407  559203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 13:45:11.463825  559203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 13:45:11.677106  559203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 13:45:11.878848  559203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 13:45:11.899452  559203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 13:45:11.928471  559203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 13:45:11.928537  559203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:45:11.942712  559203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 13:45:11.942781  559203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:45:11.957392  559203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:45:11.972849  559203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:45:11.988110  559203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 13:45:12.003608  559203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:45:12.018585  559203 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:45:12.034850  559203 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:45:12.050680  559203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 13:45:12.063559  559203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 13:45:12.077271  559203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:45:12.286219  559203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 13:46:42.836165  559203 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.549902913s)
	I1109 13:46:42.836225  559203 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 13:46:42.836306  559203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 13:46:42.844497  559203 start.go:564] Will wait 60s for crictl version
	I1109 13:46:42.844561  559203 ssh_runner.go:195] Run: which crictl
	I1109 13:46:42.850785  559203 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1109 13:46:42.903352  559203 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1109 13:46:42.903428  559203 ssh_runner.go:195] Run: crio --version
	I1109 13:46:42.940290  559203 ssh_runner.go:195] Run: crio --version
	I1109 13:46:42.978442  559203 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1109 13:46:42.983300  559203 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:46:42.983986  559203 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
	I1109 13:46:42.984012  559203 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
	I1109 13:46:42.984294  559203 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1109 13:46:42.992598  559203 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1109 13:46:42.993996  559203 kubeadm.go:884] updating cluster {Name:functional-419649 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-419649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 13:46:42.994171  559203 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:46:42.994242  559203 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:46:43.052232  559203 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:46:43.052245  559203 crio.go:433] Images already preloaded, skipping extraction
	I1109 13:46:43.052301  559203 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:46:43.097545  559203 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:46:43.097560  559203 cache_images.go:86] Images are preloaded, skipping loading
	I1109 13:46:43.097567  559203 kubeadm.go:935] updating node { 192.168.39.90 8441 v1.34.1 crio true true} ...
	I1109 13:46:43.097684  559203 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-419649 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-419649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 13:46:43.097760  559203 ssh_runner.go:195] Run: crio config
	I1109 13:46:43.159738  559203 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1109 13:46:43.159777  559203 cni.go:84] Creating CNI manager for ""
	I1109 13:46:43.159788  559203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 13:46:43.159823  559203 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 13:46:43.159857  559203 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-419649 NodeName:functional-419649 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 13:46:43.160032  559203 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-419649"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.90"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 13:46:43.160133  559203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 13:46:43.176463  559203 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 13:46:43.176572  559203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 13:46:43.192722  559203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1109 13:46:43.221827  559203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 13:46:43.251089  559203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2067 bytes)
	I1109 13:46:43.280865  559203 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I1109 13:46:43.287562  559203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:46:43.490085  559203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:46:43.513249  559203 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649 for IP: 192.168.39.90
	I1109 13:46:43.513264  559203 certs.go:195] generating shared ca certs ...
	I1109 13:46:43.513284  559203 certs.go:227] acquiring lock for ca certs: {Name:mkc766226c1ec8ac0cc61519ae61374bb0aa3b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:46:43.513562  559203 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key
	I1109 13:46:43.513603  559203 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key
	I1109 13:46:43.513617  559203 certs.go:257] generating profile certs ...
	I1109 13:46:43.513730  559203 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.key
	I1109 13:46:43.513775  559203 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/apiserver.key.6dc4be3b
	I1109 13:46:43.513839  559203 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/proxy-client.key
	I1109 13:46:43.513949  559203 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473.pem (1338 bytes)
	W1109 13:46:43.513987  559203 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473_empty.pem, impossibly tiny 0 bytes
	I1109 13:46:43.513993  559203 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 13:46:43.514012  559203 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem (1082 bytes)
	I1109 13:46:43.514030  559203 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem (1123 bytes)
	I1109 13:46:43.514054  559203 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem (1679 bytes)
	I1109 13:46:43.514103  559203 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem (1708 bytes)
	I1109 13:46:43.514909  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 13:46:43.553049  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 13:46:43.590481  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 13:46:43.627199  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 13:46:43.666998  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1109 13:46:43.704574  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 13:46:43.741698  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 13:46:43.778748  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 13:46:43.817457  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473.pem --> /usr/share/ca-certificates/553473.pem (1338 bytes)
	I1109 13:46:43.854768  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem --> /usr/share/ca-certificates/5534732.pem (1708 bytes)
	I1109 13:46:43.893208  559203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 13:46:43.930631  559203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 13:46:43.960296  559203 ssh_runner.go:195] Run: openssl version
	I1109 13:46:43.969312  559203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 13:46:43.989417  559203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:46:43.997651  559203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:46:43.997781  559203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:46:44.010071  559203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 13:46:44.028511  559203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/553473.pem && ln -fs /usr/share/ca-certificates/553473.pem /etc/ssl/certs/553473.pem"
	I1109 13:46:44.048127  559203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/553473.pem
	I1109 13:46:44.056172  559203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:42 /usr/share/ca-certificates/553473.pem
	I1109 13:46:44.056246  559203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/553473.pem
	I1109 13:46:44.066875  559203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/553473.pem /etc/ssl/certs/51391683.0"
	I1109 13:46:44.082676  559203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5534732.pem && ln -fs /usr/share/ca-certificates/5534732.pem /etc/ssl/certs/5534732.pem"
	I1109 13:46:44.098817  559203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5534732.pem
	I1109 13:46:44.106669  559203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:42 /usr/share/ca-certificates/5534732.pem
	I1109 13:46:44.106742  559203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5534732.pem
	I1109 13:46:44.115568  559203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5534732.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 13:46:44.129556  559203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 13:46:44.136177  559203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 13:46:44.145148  559203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 13:46:44.154077  559203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 13:46:44.163010  559203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 13:46:44.171974  559203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 13:46:44.180832  559203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 13:46:44.190451  559203 kubeadm.go:401] StartCluster: {Name:functional-419649 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-419649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:46:44.190535  559203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:46:44.190620  559203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:46:44.239542  559203 cri.go:89] found id: "5c23c19796791474d9cbefd6426cedf9b06ba08c535f77b519664e5d5fe0e1da"
	I1109 13:46:44.239558  559203 cri.go:89] found id: "dabded828a263500488aec4eed71cc058df72adef2bcdb48a96af101f8296650"
	I1109 13:46:44.239562  559203 cri.go:89] found id: "6678989530f542c497fe013e487fb90f58d077213ceab56369a11083ac93d8aa"
	I1109 13:46:44.239564  559203 cri.go:89] found id: "a7ec1c8b227e612c946139a2114c20f15d630c80f99ada219f51c961793c721b"
	I1109 13:46:44.239566  559203 cri.go:89] found id: "455a05faf8a4dc9342caa1ccb9a3986f838c476148641b63fc61957a6a5f78d3"
	I1109 13:46:44.239568  559203 cri.go:89] found id: "9251976d9d7b91c3dd81cc2547a4867d13c44581df4bf007267d504fee90a86e"
	I1109 13:46:44.239569  559203 cri.go:89] found id: "3c5b8301c397c59a54b37b9851ca454244b8fbec40b66bf02f5df0283be65347"
	I1109 13:46:44.239571  559203 cri.go:89] found id: "aba9dc19a06ff98bdfe68ee5b389ed2498b2d1b0320879106ffc77cd914731ac"
	I1109 13:46:44.239573  559203 cri.go:89] found id: ""
	I1109 13:46:44.239626  559203 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
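The tail of the truncated log above shows minikube enumerating the kube-system containers (cri.go:89) before restarting the cluster. For a manual post-mortem the same enumeration can be reproduced on the node; a minimal sketch, assuming "minikube ssh -p functional-419649" gives a shell on the VM (both commands appear verbatim in the log):

	# List kube-system container IDs the way minikube's cri.go does.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# Cross-check against the low-level runc view of the same containers.
	sudo runc list -f json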
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-419649 -n functional-419649
helpers_test.go:269: (dbg) Run:  kubectl --context functional-419649 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-vgzbw sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-419649 describe pod busybox-mount hello-node-75c85bcc94-vgzbw sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-419649 describe pod busybox-mount hello-node-75c85bcc94-vgzbw sp-pod:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-419649/192.168.39.90
	Start Time:       Sun, 09 Nov 2025 13:51:16 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6p5g6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-6p5g6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  22s   default-scheduler  Successfully assigned default/busybox-mount to functional-419649
	  Normal  Pulling    22s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             hello-node-75c85bcc94-vgzbw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-419649/192.168.39.90
	Start Time:       Sun, 09 Nov 2025 13:51:07 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4fzhz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4fzhz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  32s   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-vgzbw to functional-419649
	  Normal  Pulling    32s   kubelet            Pulling image "kicbase/echo-server"
	  Normal  Pulled     31s   kubelet            Successfully pulled image "kicbase/echo-server" in 919ms (919ms including waiting). Image size: 4945246 bytes.
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-419649/192.168.39.90
	Start Time:       Sun, 09 Nov 2025 13:51:15 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9sr2c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-9sr2c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  23s   default-scheduler  Successfully assigned default/sp-pod to functional-419649
	  Normal  Pulling    23s   kubelet            Pulling image "docker.io/nginx"

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (31.26s)
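Note: all three non-running pods above (busybox-mount, hello-node-75c85bcc94-vgzbw, sp-pod) were still Waiting at teardown, with busybox-mount and sp-pod mid-pull. When reproducing this failure locally, the registry pulls can be taken off the critical path by preloading the images into the profile; a sketch, assuming "minikube image load" can fetch the named images on the host (image names taken from the describe output above):

	minikube -p functional-419649 image load gcr.io/k8s-minikube/busybox:1.28.4-glibc
	minikube -p functional-419649 image load kicbase/echo-server
	minikube -p functional-419649 image load docker.io/nginx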

TestFunctional/parallel/PersistentVolumeClaim (371.09s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [ae48a075-3b00-486c-b8b2-6b2080262987] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008094393s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-419649 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-419649 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-419649 get pvc myclaim -o=json
I1109 13:51:13.009642  553473 retry.go:31] will retry after 2.447400549s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:83e92f43-6d25-4e97-9a9d-e5f7caeb97c0 ResourceVersion:954 Generation:0 CreationTimestamp:2025-11-09 13:51:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-83e92f43-6d25-4e97-9a9d-e5f7caeb97c0 StorageClassName:0xc001e917b0 VolumeMode:0xc001e917c0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-419649 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-419649 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [050d522c-0b3b-45e6-bcc9-4a75faca154f] Pending
helpers_test.go:352: "sp-pod" [050d522c-0b3b-45e6-bcc9-4a75faca154f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-419649 -n functional-419649
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-11-09 13:57:16.023951276 +0000 UTC m=+1706.746425675
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-419649 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-419649 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-419649/192.168.39.90
Start Time:       Sun, 09 Nov 2025 13:51:15 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.13
IPs:
  IP:  10.244.0.13
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9sr2c (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-9sr2c:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/sp-pod to functional-419649
  Warning  Failed     2m43s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    2m18s (x3 over 6m)   kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     27s (x2 over 5m30s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     27s (x3 over 5m30s)  kubelet            Error: ErrImagePull
  Normal   BackOff    1s (x4 over 5m29s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     1s (x4 over 5m29s)   kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-419649 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-419649 logs sp-pod -n default: exit status 1 (89.318959ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-419649 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
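The events above attribute this failure to Docker Hub's unauthenticated pull rate limit (toomanyrequests) rather than to the PVC machinery itself. One possible mitigation is to route docker.io pulls through a registry mirror inside the VM; a sketch only, assuming mirror.gcr.io is an acceptable mirror for this environment (drop-in path and TOML schema per containers-registries.conf(5); restarting crio also restarts the cluster's containers):

	# Write a registries.conf drop-in routing docker.io pulls through a mirror.
	# <<- strips the leading tabs from the heredoc body and its delimiter.
	minikube -p functional-419649 ssh -- sudo tee /etc/containers/registries.conf.d/50-mirror.conf <<-'EOF'
		[[registry]]
		prefix = "docker.io"
		location = "docker.io"
		[[registry.mirror]]
		location = "mirror.gcr.io"
	EOF
	minikube -p functional-419649 ssh -- sudo systemctl restart crio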
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-419649 -n functional-419649
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-419649 logs -n 25: (1.826871918s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-419649 ssh stat /mount-9p/created-by-pod                                                                               │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh            │ functional-419649 ssh sudo umount -f /mount-9p                                                                                    │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ mount          │ -p functional-419649 /tmp/TestFunctionalparallelMountCmdspecific-port2962691435/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ ssh            │ functional-419649 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ ssh            │ functional-419649 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh            │ functional-419649 ssh -- ls -la /mount-9p                                                                                         │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh            │ functional-419649 ssh sudo umount -f /mount-9p                                                                                    │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ ssh            │ functional-419649 ssh findmnt -T /mount1                                                                                          │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ mount          │ -p functional-419649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup522304696/001:/mount3 --alsologtostderr -v=1                 │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ mount          │ -p functional-419649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup522304696/001:/mount1 --alsologtostderr -v=1                 │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ mount          │ -p functional-419649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup522304696/001:/mount2 --alsologtostderr -v=1                 │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ ssh            │ functional-419649 ssh findmnt -T /mount1                                                                                          │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh            │ functional-419649 ssh findmnt -T /mount2                                                                                          │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh            │ functional-419649 ssh findmnt -T /mount3                                                                                          │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ mount          │ -p functional-419649 --kill=true                                                                                                  │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ update-context │ functional-419649 update-context --alsologtostderr -v=2                                                                           │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │ 09 Nov 25 13:56 UTC │
	│ update-context │ functional-419649 update-context --alsologtostderr -v=2                                                                           │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │ 09 Nov 25 13:56 UTC │
	│ update-context │ functional-419649 update-context --alsologtostderr -v=2                                                                           │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │ 09 Nov 25 13:56 UTC │
	│ image          │ functional-419649 image ls --format short --alsologtostderr                                                                       │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │ 09 Nov 25 13:56 UTC │
	│ image          │ functional-419649 image ls --format yaml --alsologtostderr                                                                        │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │ 09 Nov 25 13:56 UTC │
	│ ssh            │ functional-419649 ssh pgrep buildkitd                                                                                             │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │                     │
	│ image          │ functional-419649 image build -t localhost/my-image:functional-419649 testdata/build --alsologtostderr                            │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │ 09 Nov 25 13:56 UTC │
	│ image          │ functional-419649 image ls                                                                                                        │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │ 09 Nov 25 13:56 UTC │
	│ image          │ functional-419649 image ls --format json --alsologtostderr                                                                        │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │ 09 Nov 25 13:56 UTC │
	│ image          │ functional-419649 image ls --format table --alsologtostderr                                                                       │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │ 09 Nov 25 13:56 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
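The mount rows in the Audit table come from the 9p mount tests. The same flow can be replayed by hand against this profile (a sketch; /tmp/demo is a hypothetical host directory, the other arguments are taken from the table above):

  out/minikube-linux-amd64 -p functional-419649 mount /tmp/demo:/mount-9p --port 46464 &
  out/minikube-linux-amd64 -p functional-419649 ssh -- findmnt -T /mount-9p
  out/minikube-linux-amd64 -p functional-419649 ssh -- ls -la /mount-9p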
	==> Last Start <==
	Log file created at: 2025/11/09 13:51:39
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:51:39.860979  561575 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:51:39.861103  561575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:51:39.861110  561575 out.go:374] Setting ErrFile to fd 2...
	I1109 13:51:39.861115  561575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:51:39.861532  561575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 13:51:39.862180  561575 out.go:368] Setting JSON to false
	I1109 13:51:39.863220  561575 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":70449,"bootTime":1762625851,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 13:51:39.863355  561575 start.go:143] virtualization: kvm guest
	I1109 13:51:39.865116  561575 out.go:179] * [functional-419649] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 13:51:39.866506  561575 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:51:39.866542  561575 notify.go:221] Checking for updates...
	I1109 13:51:39.869030  561575 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:51:39.870218  561575 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 13:51:39.871342  561575 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 13:51:39.872675  561575 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 13:51:39.873970  561575 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:51:39.875604  561575 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:51:39.876177  561575 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:51:39.915932  561575 out.go:179] * Using the kvm2 driver based on existing profile
	I1109 13:51:39.917245  561575 start.go:309] selected driver: kvm2
	I1109 13:51:39.917274  561575 start.go:930] validating driver "kvm2" against &{Name:functional-419649 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-419649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:51:39.917426  561575 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:51:39.919670  561575 out.go:203] 
	W1109 13:51:39.920739  561575 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB
	I1109 13:51:39.921941  561575 out.go:203] 
	
	
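The Last Start log ends with minikube's memory guardrail: a requested allocation below the 1800MB usable minimum aborts the start before any VM work happens (several functional tests request 250MiB deliberately to exercise exactly this error path). A start that clears the guardrail looks like this sketch, with --memory matching the 4096MB already recorded in the profile config above:

  out/minikube-linux-amd64 start -p functional-419649 --driver=kvm2 --memory=4096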
	==> CRI-O <==
	Nov 09 13:57:16 functional-419649 crio[6056]: time="2025-11-09 13:57:16.991634459Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762696636991605832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203239,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11c32ab5-3cdf-4acb-abf8-699678341727 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:57:16 functional-419649 crio[6056]: time="2025-11-09 13:57:16.992461653Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9f550a7-8ed5-4165-8549-cb9bc4fe0152 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:57:16 functional-419649 crio[6056]: time="2025-11-09 13:57:16.992544224Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9f550a7-8ed5-4165-8549-cb9bc4fe0152 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:57:16 functional-419649 crio[6056]: time="2025-11-09 13:57:16.992910201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4bfeb1eaf4e15a5f6e37e7754b859374bfe1b44c20eebb4ebbf05b097e18c3c,PodSandboxId:a1a33c028bd43eb4e7eb2debd0b100c09d8c5ea5c4e6cb4a5f501c8e57bd2b9f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1762696309056036951,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a19c92c2-78f7-4060-ac8a-b2554d1b04cb,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e33bb0e4b8eb41966938874d3c927e9012ef3133494f78fc3a0915e652a8cc,PodSandboxId:b4f6bbf257db3b8fbac0994f6910a975771946cefa56c4fcedb8bc1329d92bd5,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1762696269236248644,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-7q2d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f5d2732-44a0-47cb-8c9d-f5e4b42d8bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2f786eb6996484feae9a6edf630d709e85075222b420eeacbd4f5bcdf42fe9,PodSandboxId:d343fe3deeb567979594b26e1177a2a5ab212e7e352b36c5d4d62140943587d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762696013461666904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7de985bd6dba90bd0013a202ba648a3374b5e13c9eac145c1ff3a556ad18b11,PodSandboxId:55a203c6316403aa49a2955a0853041d78c5074838c3bb8ebaa82188ae2d598d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:176269601345111480
5,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccfd82eb8e555e10a2eafc36a68aa554bade3da97f156fb8739c88f2a38cf0e,PodSandboxId:17039e2b70c72e705a132625c23567c3638c1b25f3af9b63ed1849d52da
a2b63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762696012872773465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7004426713a395e63c67bf1a805682492109c2bf495d3246fcb0f28c29099d7,PodSandboxId:9b614ffdc2151d1580f4fc9869a755d2e41aff7bd3c05802acf032de23b91e7c,Metada
ta:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762696008010989457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeac0983c075eb11541843961479d16f1ab06804bc32ece9e1a68fd86e0050c7,PodSan
dboxId:c63f4500183b22de7cb690d64315a984561f054ec009c59cdf26b6427d6060b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762696008044381598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f301eddabdc4748ec29a00b5d541088744c5ab8cc5777cea1ba92eb84008e179,PodSandboxId:b5a20367f799b5092d5b4a95d5392074f211426e75222c44241f261f7b05e763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762696007936737203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd12d128087ae793d7d792e812c6f6ccca052788ddf81a1e90ea30b64ab4849,PodSandboxId:6bda3fdf294e3217b2c066d240d843a37e0880dcedbf78cb455f103455016be0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762696007910619648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4c301aa576e79baa0710f6d51bb504,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostP
ort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c23c19796791474d9cbefd6426cedf9b06ba08c535f77b519664e5d5fe0e1da,PodSandboxId:cbb25c67e227ae6d14f5d31b745da29ae24e0b42896047793ff6795b34dbc3bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883942673163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.
container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dabded828a263500488aec4eed71cc058df72adef2bcdb48a96af101f8296650,PodSandboxId:8575db975cef01eb89fd28f03c6fa8fdca33842a2041979b632356575beb2d34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa
6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883798388910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ec1c8b227e612c946139a2114c20f15d630c80f99ada219f51c961
793c721b,PodSandboxId:209b0ec75c725d6d45ec9e42a7ba7a936de6699df431c66f39fcb3aec80a1ab1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1762695883066023200,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tw9jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d037d4-fe85-43d4-8322-67e3cf4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6678989530f542c497fe013e487fb90f58d077213ceab56369a11083ac93d8aa,PodSandboxId:8cbe5cd0f783
4ad7420d084807c5915acd62cf10d4ddf89d1d29edf6f0936966,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1762695883122410369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a05faf8a4dc9342caa1ccb9a3986f838c476148641b63fc61957a6a5f78d3,PodSandboxId:99b115d10596e4e2acea4b39e
1c2773d555e5bcd31c5829f19e770b5d584ef64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1762695878365041555,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:9251976d9d7b91c3dd81cc2547a4867d13c44581df4bf007267d504fee90a86e,PodSandboxId:7d83e0a07a96bc1167687657e35bdf244ca76065332caccf2cbd9d4fe64ff29c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1762695878239061096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5b8301c397c59a54b37b9851ca454244b8fbec40b66bf02f5df0283be65347,PodSandboxId:783c1d6ed75e1986bb80f86c0a3ef9c7b34fe784b7c558e9e71a8dae6e90a890,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1762695878231412184,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d9f550a7-8ed5-4165-8549-cb9bc4fe0152 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.057009844Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e7daf80-648b-4d13-83b7-58e1a7841815 name=/runtime.v1.RuntimeService/Version
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.057082001Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e7daf80-648b-4d13-83b7-58e1a7841815 name=/runtime.v1.RuntimeService/Version
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.059022688Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=161ce3a0-273d-4501-b454-d6d3bb5b3653 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.059849389Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762696637059772218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203239,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=161ce3a0-273d-4501-b454-d6d3bb5b3653 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.060860311Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50e9f986-efad-4508-a19c-f8127383b4d4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.060937982Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50e9f986-efad-4508-a19c-f8127383b4d4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.061330113Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4bfeb1eaf4e15a5f6e37e7754b859374bfe1b44c20eebb4ebbf05b097e18c3c,PodSandboxId:a1a33c028bd43eb4e7eb2debd0b100c09d8c5ea5c4e6cb4a5f501c8e57bd2b9f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1762696309056036951,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a19c92c2-78f7-4060-ac8a-b2554d1b04cb,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e33bb0e4b8eb41966938874d3c927e9012ef3133494f78fc3a0915e652a8cc,PodSandboxId:b4f6bbf257db3b8fbac0994f6910a975771946cefa56c4fcedb8bc1329d92bd5,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1762696269236248644,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-7q2d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f5d2732-44a0-47cb-8c9d-f5e4b42d8bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2f786eb6996484feae9a6edf630d709e85075222b420eeacbd4f5bcdf42fe9,PodSandboxId:d343fe3deeb567979594b26e1177a2a5ab212e7e352b36c5d4d62140943587d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762696013461666904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7de985bd6dba90bd0013a202ba648a3374b5e13c9eac145c1ff3a556ad18b11,PodSandboxId:55a203c6316403aa49a2955a0853041d78c5074838c3bb8ebaa82188ae2d598d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:176269601345111480
5,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccfd82eb8e555e10a2eafc36a68aa554bade3da97f156fb8739c88f2a38cf0e,PodSandboxId:17039e2b70c72e705a132625c23567c3638c1b25f3af9b63ed1849d52da
a2b63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762696012872773465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7004426713a395e63c67bf1a805682492109c2bf495d3246fcb0f28c29099d7,PodSandboxId:9b614ffdc2151d1580f4fc9869a755d2e41aff7bd3c05802acf032de23b91e7c,Metada
ta:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762696008010989457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeac0983c075eb11541843961479d16f1ab06804bc32ece9e1a68fd86e0050c7,PodSan
dboxId:c63f4500183b22de7cb690d64315a984561f054ec009c59cdf26b6427d6060b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762696008044381598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f301eddabdc4748ec29a00b5d541088744c5ab8cc5777cea1ba92eb84008e179,PodSandboxId:b5a20367f799b5092d5b4a95d5392074f211426e75222c44241f261f7b05e763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762696007936737203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd12d128087ae793d7d792e812c6f6ccca052788ddf81a1e90ea30b64ab4849,PodSandboxId:6bda3fdf294e3217b2c066d240d843a37e0880dcedbf78cb455f103455016be0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762696007910619648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4c301aa576e79baa0710f6d51bb504,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostP
ort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c23c19796791474d9cbefd6426cedf9b06ba08c535f77b519664e5d5fe0e1da,PodSandboxId:cbb25c67e227ae6d14f5d31b745da29ae24e0b42896047793ff6795b34dbc3bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883942673163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.
container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dabded828a263500488aec4eed71cc058df72adef2bcdb48a96af101f8296650,PodSandboxId:8575db975cef01eb89fd28f03c6fa8fdca33842a2041979b632356575beb2d34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa
6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883798388910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ec1c8b227e612c946139a2114c20f15d630c80f99ada219f51c961
793c721b,PodSandboxId:209b0ec75c725d6d45ec9e42a7ba7a936de6699df431c66f39fcb3aec80a1ab1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1762695883066023200,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tw9jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d037d4-fe85-43d4-8322-67e3cf4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6678989530f542c497fe013e487fb90f58d077213ceab56369a11083ac93d8aa,PodSandboxId:8cbe5cd0f783
4ad7420d084807c5915acd62cf10d4ddf89d1d29edf6f0936966,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1762695883122410369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a05faf8a4dc9342caa1ccb9a3986f838c476148641b63fc61957a6a5f78d3,PodSandboxId:99b115d10596e4e2acea4b39e
1c2773d555e5bcd31c5829f19e770b5d584ef64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1762695878365041555,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:9251976d9d7b91c3dd81cc2547a4867d13c44581df4bf007267d504fee90a86e,PodSandboxId:7d83e0a07a96bc1167687657e35bdf244ca76065332caccf2cbd9d4fe64ff29c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1762695878239061096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5b8301c397c59a54b37b9851ca454244b8fbec40b66bf02f5df0283be65347,PodSandboxId:783c1d6ed75e1986bb80f86c0a3ef9c7b34fe784b7c558e9e71a8dae6e90a890,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1762695878231412184,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50e9f986-efad-4508-a19c-f8127383b4d4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.108883631Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8006650-6b6d-4bc8-bd03-d35b8bd0a24d name=/runtime.v1.RuntimeService/Version
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.108985099Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8006650-6b6d-4bc8-bd03-d35b8bd0a24d name=/runtime.v1.RuntimeService/Version
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.111320990Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=056bef54-c6d6-414b-853a-28a968c69d95 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.112660008Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762696637112559003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203239,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=056bef54-c6d6-414b-853a-28a968c69d95 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.114388178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a945d59c-d589-4387-bf0f-6331904cbf98 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.114478332Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a945d59c-d589-4387-bf0f-6331904cbf98 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.116308761Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4bfeb1eaf4e15a5f6e37e7754b859374bfe1b44c20eebb4ebbf05b097e18c3c,PodSandboxId:a1a33c028bd43eb4e7eb2debd0b100c09d8c5ea5c4e6cb4a5f501c8e57bd2b9f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1762696309056036951,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a19c92c2-78f7-4060-ac8a-b2554d1b04cb,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e33bb0e4b8eb41966938874d3c927e9012ef3133494f78fc3a0915e652a8cc,PodSandboxId:b4f6bbf257db3b8fbac0994f6910a975771946cefa56c4fcedb8bc1329d92bd5,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1762696269236248644,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-7q2d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f5d2732-44a0-47cb-8c9d-f5e4b42d8bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2f786eb6996484feae9a6edf630d709e85075222b420eeacbd4f5bcdf42fe9,PodSandboxId:d343fe3deeb567979594b26e1177a2a5ab212e7e352b36c5d4d62140943587d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762696013461666904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7de985bd6dba90bd0013a202ba648a3374b5e13c9eac145c1ff3a556ad18b11,PodSandboxId:55a203c6316403aa49a2955a0853041d78c5074838c3bb8ebaa82188ae2d598d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:176269601345111480
5,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccfd82eb8e555e10a2eafc36a68aa554bade3da97f156fb8739c88f2a38cf0e,PodSandboxId:17039e2b70c72e705a132625c23567c3638c1b25f3af9b63ed1849d52da
a2b63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762696012872773465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7004426713a395e63c67bf1a805682492109c2bf495d3246fcb0f28c29099d7,PodSandboxId:9b614ffdc2151d1580f4fc9869a755d2e41aff7bd3c05802acf032de23b91e7c,Metada
ta:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762696008010989457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeac0983c075eb11541843961479d16f1ab06804bc32ece9e1a68fd86e0050c7,PodSan
dboxId:c63f4500183b22de7cb690d64315a984561f054ec009c59cdf26b6427d6060b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762696008044381598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f301eddabdc4748ec29a00b5d541088744c5ab8cc5777cea1ba92eb84008e179,PodSandboxId:b5a20367f799b5092d5b4a95d5392074f211426e75222c44241f261f7b05e763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762696007936737203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd12d128087ae793d7d792e812c6f6ccca052788ddf81a1e90ea30b64ab4849,PodSandboxId:6bda3fdf294e3217b2c066d240d843a37e0880dcedbf78cb455f103455016be0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762696007910619648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4c301aa576e79baa0710f6d51bb504,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostP
ort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c23c19796791474d9cbefd6426cedf9b06ba08c535f77b519664e5d5fe0e1da,PodSandboxId:cbb25c67e227ae6d14f5d31b745da29ae24e0b42896047793ff6795b34dbc3bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883942673163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.
container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dabded828a263500488aec4eed71cc058df72adef2bcdb48a96af101f8296650,PodSandboxId:8575db975cef01eb89fd28f03c6fa8fdca33842a2041979b632356575beb2d34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa
6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883798388910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ec1c8b227e612c946139a2114c20f15d630c80f99ada219f51c961
793c721b,PodSandboxId:209b0ec75c725d6d45ec9e42a7ba7a936de6699df431c66f39fcb3aec80a1ab1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1762695883066023200,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tw9jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d037d4-fe85-43d4-8322-67e3cf4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6678989530f542c497fe013e487fb90f58d077213ceab56369a11083ac93d8aa,PodSandboxId:8cbe5cd0f783
4ad7420d084807c5915acd62cf10d4ddf89d1d29edf6f0936966,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1762695883122410369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a05faf8a4dc9342caa1ccb9a3986f838c476148641b63fc61957a6a5f78d3,PodSandboxId:99b115d10596e4e2acea4b39e
1c2773d555e5bcd31c5829f19e770b5d584ef64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1762695878365041555,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:9251976d9d7b91c3dd81cc2547a4867d13c44581df4bf007267d504fee90a86e,PodSandboxId:7d83e0a07a96bc1167687657e35bdf244ca76065332caccf2cbd9d4fe64ff29c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1762695878239061096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5b8301c397c59a54b37b9851ca454244b8fbec40b66bf02f5df0283be65347,PodSandboxId:783c1d6ed75e1986bb80f86c0a3ef9c7b34fe784b7c558e9e71a8dae6e90a890,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1762695878231412184,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a945d59c-d589-4387-bf0f-6331904cbf98 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.170857017Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ecaa33c7-cc7a-4211-979b-93410f547be6 name=/runtime.v1.RuntimeService/Version
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.170960426Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ecaa33c7-cc7a-4211-979b-93410f547be6 name=/runtime.v1.RuntimeService/Version
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.172602670Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b390395a-bd63-4cb4-9090-2cc18e940551 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.173389422Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762696637173361650,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203239,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b390395a-bd63-4cb4-9090-2cc18e940551 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.173941326Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92854141-9bdd-4196-911b-07fbc1dc553f name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.173997070Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92854141-9bdd-4196-911b-07fbc1dc553f name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 13:57:17 functional-419649 crio[6056]: time="2025-11-09 13:57:17.174365737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4bfeb1eaf4e15a5f6e37e7754b859374bfe1b44c20eebb4ebbf05b097e18c3c,PodSandboxId:a1a33c028bd43eb4e7eb2debd0b100c09d8c5ea5c4e6cb4a5f501c8e57bd2b9f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1762696309056036951,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a19c92c2-78f7-4060-ac8a-b2554d1b04cb,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e33bb0e4b8eb41966938874d3c927e9012ef3133494f78fc3a0915e652a8cc,PodSandboxId:b4f6bbf257db3b8fbac0994f6910a975771946cefa56c4fcedb8bc1329d92bd5,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1762696269236248644,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-7q2d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f5d2732-44a0-47cb-8c9d-f5e4b42d8bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2f786eb6996484feae9a6edf630d709e85075222b420eeacbd4f5bcdf42fe9,PodSandboxId:d343fe3deeb567979594b26e1177a2a5ab212e7e352b36c5d4d62140943587d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762696013461666904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7de985bd6dba90bd0013a202ba648a3374b5e13c9eac145c1ff3a556ad18b11,PodSandboxId:55a203c6316403aa49a2955a0853041d78c5074838c3bb8ebaa82188ae2d598d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:176269601345111480
5,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccfd82eb8e555e10a2eafc36a68aa554bade3da97f156fb8739c88f2a38cf0e,PodSandboxId:17039e2b70c72e705a132625c23567c3638c1b25f3af9b63ed1849d52da
a2b63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762696012872773465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7004426713a395e63c67bf1a805682492109c2bf495d3246fcb0f28c29099d7,PodSandboxId:9b614ffdc2151d1580f4fc9869a755d2e41aff7bd3c05802acf032de23b91e7c,Metada
ta:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762696008010989457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeac0983c075eb11541843961479d16f1ab06804bc32ece9e1a68fd86e0050c7,PodSan
dboxId:c63f4500183b22de7cb690d64315a984561f054ec009c59cdf26b6427d6060b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762696008044381598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f301eddabdc4748ec29a00b5d541088744c5ab8cc5777cea1ba92eb84008e179,PodSandboxId:b5a20367f799b5092d5b4a95d5392074f211426e75222c44241f261f7b05e763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762696007936737203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd12d128087ae793d7d792e812c6f6ccca052788ddf81a1e90ea30b64ab4849,PodSandboxId:6bda3fdf294e3217b2c066d240d843a37e0880dcedbf78cb455f103455016be0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762696007910619648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4c301aa576e79baa0710f6d51bb504,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostP
ort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c23c19796791474d9cbefd6426cedf9b06ba08c535f77b519664e5d5fe0e1da,PodSandboxId:cbb25c67e227ae6d14f5d31b745da29ae24e0b42896047793ff6795b34dbc3bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883942673163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.
container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dabded828a263500488aec4eed71cc058df72adef2bcdb48a96af101f8296650,PodSandboxId:8575db975cef01eb89fd28f03c6fa8fdca33842a2041979b632356575beb2d34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa
6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883798388910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ec1c8b227e612c946139a2114c20f15d630c80f99ada219f51c961
793c721b,PodSandboxId:209b0ec75c725d6d45ec9e42a7ba7a936de6699df431c66f39fcb3aec80a1ab1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1762695883066023200,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tw9jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d037d4-fe85-43d4-8322-67e3cf4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6678989530f542c497fe013e487fb90f58d077213ceab56369a11083ac93d8aa,PodSandboxId:8cbe5cd0f783
4ad7420d084807c5915acd62cf10d4ddf89d1d29edf6f0936966,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1762695883122410369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a05faf8a4dc9342caa1ccb9a3986f838c476148641b63fc61957a6a5f78d3,PodSandboxId:99b115d10596e4e2acea4b39e
1c2773d555e5bcd31c5829f19e770b5d584ef64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1762695878365041555,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:9251976d9d7b91c3dd81cc2547a4867d13c44581df4bf007267d504fee90a86e,PodSandboxId:7d83e0a07a96bc1167687657e35bdf244ca76065332caccf2cbd9d4fe64ff29c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1762695878239061096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5b8301c397c59a54b37b9851ca454244b8fbec40b66bf02f5df0283be65347,PodSandboxId:783c1d6ed75e1986bb80f86c0a3ef9c7b34fe784b7c558e9e71a8dae6e90a890,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1762695878231412184,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92854141-9bdd-4196-911b-07fbc1dc553f name=/runtime.v1.RuntimeService/ListContainers
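	
	The debug entries above are the kubelet's periodic CRI polls against CRI-O's gRPC socket: a Version handshake, an ImageFsInfo query, then an unfiltered ListContainers (which CRI-O notes as "No filters were applied, returning full container list"). A minimal Go sketch of the same three calls, assuming the default CRI-O socket path (/var/run/crio/crio.sock) and the k8s.io/cri-api client stubs; error handling is abbreviated and this is an illustration, not minikube's own code:
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// The "unix://" scheme lets grpc-go dial the local CRI-O socket.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// Mirrors the logged "Request: &VersionRequest{Version:,}".
		ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)
	
		// Mirrors the logged ListContainersRequest with an empty filter,
		// which returns the full container list dumped above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s %-25s %s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}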
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a4bfeb1eaf4e1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e     5 minutes ago       Exited              mount-munger              0                   a1a33c028bd43       busybox-mount
	92e33bb0e4b8e       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6   6 minutes ago       Running             echo-server               0                   b4f6bbf257db3       hello-node-connect-7d85dfc575-7q2d9
	df2f786eb6996       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        10 minutes ago      Running             coredns                   2                   d343fe3deeb56       coredns-66bc5c9577-wkwss
	b7de985bd6dba       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        10 minutes ago      Running             coredns                   2                   55a203c631640       coredns-66bc5c9577-zrw7g
	4ccfd82eb8e55       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        10 minutes ago      Running             storage-provisioner       3                   17039e2b70c72       storage-provisioner
	eeac0983c075e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                        10 minutes ago      Running             kube-scheduler            2                   c63f4500183b2       kube-scheduler-functional-419649
	a7004426713a3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        10 minutes ago      Running             etcd                      2                   9b614ffdc2151       etcd-functional-419649
	f301eddabdc47       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                        10 minutes ago      Running             kube-controller-manager   2                   b5a20367f799b       kube-controller-manager-functional-419649
	efd12d128087a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                        10 minutes ago      Running             kube-apiserver            0                   6bda3fdf294e3       kube-apiserver-functional-419649
	5c23c19796791       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        12 minutes ago      Exited              coredns                   1                   cbb25c67e227a       coredns-66bc5c9577-wkwss
	dabded828a263       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        12 minutes ago      Exited              coredns                   1                   8575db975cef0       coredns-66bc5c9577-zrw7g
	6678989530f54       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        12 minutes ago      Exited              storage-provisioner       2                   8cbe5cd0f7834       storage-provisioner
	a7ec1c8b227e6       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                        12 minutes ago      Exited              kube-proxy                1                   209b0ec75c725       kube-proxy-tw9jj
	455a05faf8a4d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                        12 minutes ago      Exited              kube-scheduler            1                   99b115d10596e       kube-scheduler-functional-419649
	9251976d9d7b9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        12 minutes ago      Exited              etcd                      1                   7d83e0a07a96b       etcd-functional-419649
	3c5b8301c397c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                        12 minutes ago      Exited              kube-controller-manager   1                   783c1d6ed75e1       kube-controller-manager-functional-419649
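	
	The CREATED column in the table above is the humanized form of the nanosecond CreatedAt values carried in the ListContainers response. Converting one of the logged values (the mount-munger container, CreatedAt:1762696309056036951) against the capture time in the log headers (13:57:17 UTC) reproduces the "5 minutes ago" entry; a small Go check:
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		createdAt := int64(1762696309056036951) // CreatedAt from the log, in nanoseconds
		t := time.Unix(0, createdAt).UTC()
		fmt.Println(t.Format(time.RFC3339)) // 2025-11-09T13:51:49Z
	
		// Relative to the log capture time taken from the entries above.
		capture := time.Date(2025, 11, 9, 13, 57, 17, 0, time.UTC)
		fmt.Println(capture.Sub(t).Round(time.Second)) // 5m28s, i.e. "5 minutes ago"
	}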
	
	
	==> coredns [5c23c19796791474d9cbefd6426cedf9b06ba08c535f77b519664e5d5fe0e1da] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44800 - 53651 "HINFO IN 1400593764380402781.8492229270877848689. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025331213s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b7de985bd6dba90bd0013a202ba648a3374b5e13c9eac145c1ff3a556ad18b11] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57144 - 62700 "HINFO IN 7058232511171921965.4871291774030092335. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.426433128s
	
	
	==> coredns [dabded828a263500488aec4eed71cc058df72adef2bcdb48a96af101f8296650] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49606 - 37184 "HINFO IN 9083893957874782947.4427150369312914047. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.433296592s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [df2f786eb6996484feae9a6edf630d709e85075222b420eeacbd4f5bcdf42fe9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45303 - 52022 "HINFO IN 6137322651211618462.5033134956876486643. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.038624673s
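	
	All four coredns instances report the same "Running configuration SHA512", meaning every replica and restart loaded an identical rendered Corefile; the reload plugin uses this digest to detect configuration changes. A minimal sketch of the idea, assuming (as the message suggests) a plain SHA-512 over the rendered config text; the Corefile below is hypothetical, for illustration only:
	
	package main
	
	import (
		"crypto/sha512"
		"fmt"
	)
	
	func main() {
		// Hypothetical Corefile; any change to it changes the digest,
		// which is what triggers the reload plugin to re-read config.
		corefile := `.:53 {
	    errors
	    health
	    kubernetes cluster.local in-addr.arpa ip6.arpa
	    forward . /etc/resolv.conf
	    cache 30
	    reload
	}`
	
		sum := sha512.Sum512([]byte(corefile))
		fmt.Printf("configuration SHA512 = %x\n", sum)
	}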
	
	
	==> describe nodes <==
	Name:               functional-419649
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-419649
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=functional-419649
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T13_43_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:43:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-419649
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 13:57:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 13:56:52 +0000   Sun, 09 Nov 2025 13:43:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 13:56:52 +0000   Sun, 09 Nov 2025 13:43:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 13:56:52 +0000   Sun, 09 Nov 2025 13:43:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 13:56:52 +0000   Sun, 09 Nov 2025 13:43:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.90
	  Hostname:    functional-419649
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 f58fffb24b824db893edaea13eb8cd34
	  System UUID:                f58fffb2-4b82-4db8-93ed-aea13eb8cd34
	  Boot ID:                    10be153f-12a9-4056-a1dd-41beb5dacdf5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-vgzbw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  default                     hello-node-connect-7d85dfc575-7q2d9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  default                     mysql-5bb876957f-2vzsj                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    5m23s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-wkwss                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 coredns-66bc5c9577-zrw7g                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-functional-419649                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-functional-419649              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-419649     200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-tw9jj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-functional-419649              100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-nhqbg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9z7jc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (72%)  700m (35%)
	  memory             752Mi (19%)  1040Mi (26%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x3 over 13m)  kubelet          Node functional-419649 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x3 over 13m)  kubelet          Node functional-419649 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x3 over 13m)  kubelet          Node functional-419649 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     13m                kubelet          Node functional-419649 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node functional-419649 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node functional-419649 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                13m                kubelet          Node functional-419649 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node functional-419649 event: Registered Node functional-419649 in Controller
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-419649 status is now: NodeHasSufficientMemory
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-419649 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-419649 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node functional-419649 event: Registered Node functional-419649 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-419649 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-419649 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-419649 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-419649 event: Registered Node functional-419649 in Controller
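	
	The percentages in the "Allocated resources" block above are the summed pod requests/limits divided by the node's allocatable capacity (2 CPUs = 2000m, 4001788Ki memory, from the Allocatable block), truncated to whole percent. A short Go check reproducing the four numbers shown:
	
	package main
	
	import "fmt"
	
	func pct(used, capacity int64) int64 { return used * 100 / capacity }
	
	func main() {
		const cpuAllocM = 2000     // 2 CPUs, in millicores
		const memAllocKi = 4001788 // allocatable memory, Ki
	
		fmt.Println(pct(1450, cpuAllocM))       // cpu requests: 72
		fmt.Println(pct(700, cpuAllocM))        // cpu limits:   35
		fmt.Println(pct(752*1024, memAllocKi))  // memory requests (752Mi): 19
		fmt.Println(pct(1040*1024, memAllocKi)) // memory limits (1040Mi):  26
	}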
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.100058] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.121158] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.109438] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.192670] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.028077] kauditd_printk_skb: 18 callbacks suppressed
	[Nov 9 13:44] kauditd_printk_skb: 249 callbacks suppressed
	[  +5.534068] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.229697] kauditd_printk_skb: 78 callbacks suppressed
	[  +1.140310] kauditd_printk_skb: 57 callbacks suppressed
	[  +0.132391] kauditd_printk_skb: 194 callbacks suppressed
	[ +11.536468] kauditd_printk_skb: 116 callbacks suppressed
	[Nov 9 13:45] kauditd_printk_skb: 12 callbacks suppressed
	[Nov 9 13:46] kauditd_printk_skb: 263 callbacks suppressed
	[  +0.942624] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.198959] kauditd_printk_skb: 150 callbacks suppressed
	[Nov 9 13:47] kauditd_printk_skb: 125 callbacks suppressed
	[Nov 9 13:51] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.083221] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.000364] kauditd_printk_skb: 78 callbacks suppressed
	[  +0.000133] kauditd_printk_skb: 62 callbacks suppressed
	[  +7.412474] kauditd_printk_skb: 62 callbacks suppressed
	[  +0.000033] kauditd_printk_skb: 25 callbacks suppressed
	[Nov 9 13:56] crun[10355]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +3.099297] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [9251976d9d7b91c3dd81cc2547a4867d13c44581df4bf007267d504fee90a86e] <==
	{"level":"warn","ts":"2025-11-09T13:44:40.717700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:44:40.747628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:44:40.762504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:44:40.787093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:44:40.809876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:44:40.833047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:44:40.947438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45540","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T13:45:03.891974Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-09T13:45:03.892178Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-419649","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.90:2380"],"advertise-client-urls":["https://192.168.39.90:2379"]}
	{"level":"error","ts":"2025-11-09T13:45:03.892264Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-09T13:45:03.892316Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-09T13:45:03.971568Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T13:45:03.971624Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8d381aaacda0b9bd","current-leader-member-id":"8d381aaacda0b9bd"}
	{"level":"warn","ts":"2025-11-09T13:45:03.971649Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T13:45:03.971856Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T13:45:03.971872Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T13:45:03.971893Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-09T13:45:03.971779Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-09T13:45:03.971967Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.90:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T13:45:03.971979Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.90:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T13:45:03.971986Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.90:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T13:45:03.975952Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.90:2380"}
	{"level":"error","ts":"2025-11-09T13:45:03.976056Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.90:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T13:45:03.976084Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.90:2380"}
	{"level":"info","ts":"2025-11-09T13:45:03.976113Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-419649","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.90:2380"],"advertise-client-urls":["https://192.168.39.90:2379"]}
	
	
	==> etcd [a7004426713a395e63c67bf1a805682492109c2bf495d3246fcb0f28c29099d7] <==
	{"level":"warn","ts":"2025-11-09T13:46:50.251723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.267293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.284551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.296069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.315968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.326763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.340581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.359060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.373234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.389155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.403233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.419191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.445351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.477707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.487681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.498150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.512979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.520588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.532487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.546948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.557302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.616155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54108","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T13:56:49.442324Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1072}
	{"level":"info","ts":"2025-11-09T13:56:49.476661Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1072,"took":"33.865086ms","hash":4179198837,"current-db-size-bytes":3330048,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1560576,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-11-09T13:56:49.476717Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4179198837,"revision":1072,"compact-revision":-1}
	
	
	==> kernel <==
	 13:57:17 up 14 min,  0 users,  load average: 0.18, 0.22, 0.18
	Linux functional-419649 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Nov  1 20:49:51 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [efd12d128087ae793d7d792e812c6f6ccca052788ddf81a1e90ea30b64ab4849] <==
	I1109 13:46:51.566773       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 13:46:51.569360       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 13:46:51.569444       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 13:46:51.569452       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 13:46:51.571432       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 13:46:51.571516       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 13:46:51.583391       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1109 13:46:51.608585       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1109 13:46:51.898682       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 13:46:52.389190       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 13:46:54.236687       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 13:46:54.293952       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 13:46:54.332997       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 13:46:54.343624       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 13:46:56.099629       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 13:46:56.298692       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 13:46:56.399527       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 13:51:02.151207       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.49.22"}
	I1109 13:51:07.123619       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.49.89"}
	I1109 13:51:08.337891       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.221.72"}
	I1109 13:51:40.973107       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 13:51:41.359605       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.76.215"}
	I1109 13:51:41.382444       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.72.212"}
	I1109 13:51:54.435031       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.201.164"}
	I1109 13:56:51.482896       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [3c5b8301c397c59a54b37b9851ca454244b8fbec40b66bf02f5df0283be65347] <==
	I1109 13:44:46.211484       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 13:44:46.211674       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 13:44:46.211775       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-419649"
	I1109 13:44:46.211938       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1109 13:44:46.220947       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1109 13:44:46.224727       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 13:44:46.224832       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1109 13:44:46.226268       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1109 13:44:46.226303       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1109 13:44:46.226310       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1109 13:44:46.226317       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1109 13:44:46.229235       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1109 13:44:46.231468       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 13:44:46.233055       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 13:44:46.236148       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1109 13:44:46.242644       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1109 13:44:46.244477       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1109 13:44:46.244739       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1109 13:44:46.245344       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 13:44:46.246729       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1109 13:44:46.246887       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 13:44:46.248505       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 13:44:46.252228       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1109 13:44:46.255601       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 13:44:46.258981       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-controller-manager [f301eddabdc4748ec29a00b5d541088744c5ab8cc5777cea1ba92eb84008e179] <==
	I1109 13:46:55.929401       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 13:46:55.935001       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 13:46:55.939490       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1109 13:46:55.943866       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1109 13:46:55.944041       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1109 13:46:55.944130       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 13:46:55.944198       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 13:46:55.944698       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 13:46:55.944849       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 13:46:55.944937       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 13:46:55.945195       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-419649"
	I1109 13:46:55.945257       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1109 13:46:55.945718       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1109 13:46:55.948124       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 13:46:55.949990       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 13:46:55.956456       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1109 13:46:55.965862       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1109 13:51:41.099168       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.133658       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.142440       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.145364       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.165332       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.174762       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.175487       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.182280       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [a7ec1c8b227e612c946139a2114c20f15d630c80f99ada219f51c961793c721b] <==
	I1109 13:44:43.756193       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 13:44:43.857352       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 13:44:43.857411       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.90"]
	E1109 13:44:43.857528       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 13:44:44.033990       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1109 13:44:44.034236       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1109 13:44:44.034359       1 server_linux.go:132] "Using iptables Proxier"
	I1109 13:44:44.063937       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 13:44:44.064227       1 server.go:527] "Version info" version="v1.34.1"
	I1109 13:44:44.064263       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:44:44.066418       1 config.go:106] "Starting endpoint slice config controller"
	I1109 13:44:44.066493       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 13:44:44.069250       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 13:44:44.069283       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 13:44:44.075985       1 config.go:200] "Starting service config controller"
	I1109 13:44:44.076022       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 13:44:44.076752       1 config.go:309] "Starting node config controller"
	I1109 13:44:44.076862       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 13:44:44.076872       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 13:44:44.166673       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1109 13:44:44.170507       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 13:44:44.177107       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [455a05faf8a4dc9342caa1ccb9a3986f838c476148641b63fc61957a6a5f78d3] <==
	I1109 13:44:39.692053       1 serving.go:386] Generated self-signed cert in-memory
	W1109 13:44:41.712695       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 13:44:41.712723       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 13:44:41.712732       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 13:44:41.712738       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 13:44:41.813904       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 13:44:41.813983       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:44:41.819415       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:44:41.819568       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:44:41.820003       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 13:44:41.820117       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 13:44:41.921485       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:45:03.884393       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1109 13:45:03.884459       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1109 13:45:03.884484       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1109 13:45:03.884518       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:45:03.884632       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1109 13:45:03.884711       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [eeac0983c075eb11541843961479d16f1ab06804bc32ece9e1a68fd86e0050c7] <==
	I1109 13:46:50.361953       1 serving.go:386] Generated self-signed cert in-memory
	W1109 13:46:51.486251       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 13:46:51.486300       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 13:46:51.486312       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 13:46:51.486319       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 13:46:51.563170       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 13:46:51.565842       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:46:51.570632       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 13:46:51.573338       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:46:51.573391       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:46:51.573409       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 13:46:51.674422       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 13:56:47 functional-419649 kubelet[6431]: E1109 13:56:47.091403    6431 manager.go:1116] Failed to create existing container: /kubepods/burstable/podc02798d3a566bdf1b79c9a1609aa8851/crio-7d83e0a07a96bc1167687657e35bdf244ca76065332caccf2cbd9d4fe64ff29c: Error finding container 7d83e0a07a96bc1167687657e35bdf244ca76065332caccf2cbd9d4fe64ff29c: Status 404 returned error can't find the container with id 7d83e0a07a96bc1167687657e35bdf244ca76065332caccf2cbd9d4fe64ff29c
	Nov 09 13:56:47 functional-419649 kubelet[6431]: E1109 13:56:47.093334    6431 manager.go:1116] Failed to create existing container: /kubepods/burstable/podcc2b7dd4-023d-4994-9237-fabeae6e63ce/crio-cbb25c67e227ae6d14f5d31b745da29ae24e0b42896047793ff6795b34dbc3bd: Error finding container cbb25c67e227ae6d14f5d31b745da29ae24e0b42896047793ff6795b34dbc3bd: Status 404 returned error can't find the container with id cbb25c67e227ae6d14f5d31b745da29ae24e0b42896047793ff6795b34dbc3bd
	Nov 09 13:56:47 functional-419649 kubelet[6431]: E1109 13:56:47.094235    6431 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod62b30b4238b2d99ce79cd53f17bb6da4/crio-783c1d6ed75e1986bb80f86c0a3ef9c7b34fe784b7c558e9e71a8dae6e90a890: Error finding container 783c1d6ed75e1986bb80f86c0a3ef9c7b34fe784b7c558e9e71a8dae6e90a890: Status 404 returned error can't find the container with id 783c1d6ed75e1986bb80f86c0a3ef9c7b34fe784b7c558e9e71a8dae6e90a890
	Nov 09 13:56:47 functional-419649 kubelet[6431]: E1109 13:56:47.298198    6431 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762696607297692551  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203239}  inodes_used:{value:105}}"
	Nov 09 13:56:47 functional-419649 kubelet[6431]: E1109 13:56:47.298294    6431 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762696607297692551  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203239}  inodes_used:{value:105}}"
	Nov 09 13:56:49 functional-419649 kubelet[6431]: E1109 13:56:49.124102    6431 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 09 13:56:49 functional-419649 kubelet[6431]: E1109 13:56:49.124170    6431 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 09 13:56:49 functional-419649 kubelet[6431]: E1109 13:56:49.124387    6431 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(050d522c-0b3b-45e6-bcc9-4a75faca154f): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 09 13:56:49 functional-419649 kubelet[6431]: E1109 13:56:49.124417    6431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="050d522c-0b3b-45e6-bcc9-4a75faca154f"
	Nov 09 13:56:57 functional-419649 kubelet[6431]: E1109 13:56:57.305277    6431 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762696617304870199  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203239}  inodes_used:{value:105}}"
	Nov 09 13:56:57 functional-419649 kubelet[6431]: E1109 13:56:57.305303    6431 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762696617304870199  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203239}  inodes_used:{value:105}}"
	Nov 09 13:56:59 functional-419649 kubelet[6431]: E1109 13:56:59.951461    6431 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists"
	Nov 09 13:56:59 functional-419649 kubelet[6431]: E1109 13:56:59.951587    6431 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists" pod="kube-system/kube-proxy-tw9jj"
	Nov 09 13:56:59 functional-419649 kubelet[6431]: E1109 13:56:59.951609    6431 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists" pod="kube-system/kube-proxy-tw9jj"
	Nov 09 13:56:59 functional-419649 kubelet[6431]: E1109 13:56:59.951667    6431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-tw9jj_kube-system(64d037d4-fe85-43d4-8322-67e3cf4a7b89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-tw9jj_kube-system(64d037d4-fe85-43d4-8322-67e3cf4a7b89)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\\\" already exists\"" pod="kube-system/kube-proxy-tw9jj" podUID="64d037d4-fe85-43d4-8322-67e3cf4a7b89"
	Nov 09 13:57:04 functional-419649 kubelet[6431]: E1109 13:57:04.935037    6431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="050d522c-0b3b-45e6-bcc9-4a75faca154f"
	Nov 09 13:57:07 functional-419649 kubelet[6431]: E1109 13:57:07.307654    6431 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762696627307160230  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203239}  inodes_used:{value:105}}"
	Nov 09 13:57:07 functional-419649 kubelet[6431]: E1109 13:57:07.307681    6431 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762696627307160230  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203239}  inodes_used:{value:105}}"
	Nov 09 13:57:10 functional-419649 kubelet[6431]: E1109 13:57:10.946167    6431 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists"
	Nov 09 13:57:10 functional-419649 kubelet[6431]: E1109 13:57:10.946249    6431 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists" pod="kube-system/kube-proxy-tw9jj"
	Nov 09 13:57:10 functional-419649 kubelet[6431]: E1109 13:57:10.946301    6431 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists" pod="kube-system/kube-proxy-tw9jj"
	Nov 09 13:57:10 functional-419649 kubelet[6431]: E1109 13:57:10.946353    6431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-tw9jj_kube-system(64d037d4-fe85-43d4-8322-67e3cf4a7b89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-tw9jj_kube-system(64d037d4-fe85-43d4-8322-67e3cf4a7b89)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\\\" already exists\"" pod="kube-system/kube-proxy-tw9jj" podUID="64d037d4-fe85-43d4-8322-67e3cf4a7b89"
	Nov 09 13:57:15 functional-419649 kubelet[6431]: E1109 13:57:15.934454    6431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="050d522c-0b3b-45e6-bcc9-4a75faca154f"
	Nov 09 13:57:17 functional-419649 kubelet[6431]: E1109 13:57:17.309380    6431 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762696637308899366  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203239}  inodes_used:{value:105}}"
	Nov 09 13:57:17 functional-419649 kubelet[6431]: E1109 13:57:17.309415    6431 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762696637308899366  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203239}  inodes_used:{value:105}}"
	
	
	==> storage-provisioner [4ccfd82eb8e555e10a2eafc36a68aa554bade3da97f156fb8739c88f2a38cf0e] <==
	W1109 13:56:52.855409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:54.859335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:54.865843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:56.869394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:56.876015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:58.880199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:56:58.891118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:57:00.896187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:57:00.905972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:57:02.909964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:57:02.917915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:57:04.921403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:57:04.931235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:57:06.936724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:57:06.948243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:57:08.952511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:57:08.962924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:57:10.966188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:57:10.973526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:57:12.977136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:57:12.982640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:57:14.987052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:57:14.993072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:57:17.003205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:57:17.014991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6678989530f542c497fe013e487fb90f58d077213ceab56369a11083ac93d8aa] <==
	I1109 13:44:43.443213       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 13:44:43.494214       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 13:44:43.509099       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 13:44:43.550008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:44:47.009647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:44:51.271035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:44:54.874686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:44:57.929867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:45:00.953468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:45:00.962351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 13:45:00.962538       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 13:45:00.962713       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"edabc5ba-9ba5-4f59-828d-21dd30bf1c29", APIVersion:"v1", ResourceVersion:"533", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-419649_f071ba0b-f946-440f-ad3d-fe78e643a2d5 became leader
	I1109 13:45:00.962870       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-419649_f071ba0b-f946-440f-ad3d-fe78e643a2d5!
	W1109 13:45:00.967758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:45:00.981678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 13:45:01.063910       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-419649_f071ba0b-f946-440f-ad3d-fe78e643a2d5!
	W1109 13:45:02.986290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:45:02.993756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
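
The kubelet entries above pin down the proximate cause: every docker.io pull is rejected with toomanyrequests, so sp-pod and mysql sit in ErrImagePull/ImagePullBackOff until the test deadlines lapse. As a minimal sketch (assuming k8s.io/client-go; illustrative only, not the harness's actual code), a poll loop can classify that waiting state from the pod status and surface the registry's message instead of waiting out the clock:

	package waitutil

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// imagePullStuck reports whether any container in the pod is waiting on a
	// failed image pull, returning a message built from the kubelet's waiting
	// reason (ErrImagePull / ImagePullBackOff) and the registry error, e.g.
	// the docker.io "toomanyrequests" lines in the log above.
	func imagePullStuck(p *corev1.Pod) (string, bool) {
		for _, cs := range p.Status.ContainerStatuses {
			if w := cs.State.Waiting; w != nil {
				switch w.Reason {
				case "ErrImagePull", "ImagePullBackOff":
					return fmt.Sprintf("%s/%s: %s: %s", p.Namespace, p.Name, w.Reason, w.Message), true
				}
			}
		}
		return "", false
	}
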
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-419649 -n functional-419649
helpers_test.go:269: (dbg) Run:  kubectl --context functional-419649 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-vgzbw mysql-5bb876957f-2vzsj sp-pod dashboard-metrics-scraper-77bf4d6c4c-nhqbg kubernetes-dashboard-855c9754f9-9z7jc
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-419649 describe pod busybox-mount hello-node-75c85bcc94-vgzbw mysql-5bb876957f-2vzsj sp-pod dashboard-metrics-scraper-77bf4d6c4c-nhqbg kubernetes-dashboard-855c9754f9-9z7jc
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-419649 describe pod busybox-mount hello-node-75c85bcc94-vgzbw mysql-5bb876957f-2vzsj sp-pod dashboard-metrics-scraper-77bf4d6c4c-nhqbg kubernetes-dashboard-855c9754f9-9z7jc: exit status 1 (123.818141ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-419649/192.168.39.90
	Start Time:       Sun, 09 Nov 2025 13:51:16 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.14
	IPs:
	  IP:  10.244.0.14
	Containers:
	  mount-munger:
	    Container ID:  cri-o://a4bfeb1eaf4e15a5f6e37e7754b859374bfe1b44c20eebb4ebbf05b097e18c3c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 09 Nov 2025 13:51:49 +0000
	      Finished:     Sun, 09 Nov 2025 13:51:49 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6p5g6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-6p5g6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m1s   default-scheduler  Successfully assigned default/busybox-mount to functional-419649
	  Normal  Pulling    6m1s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m29s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.565s (31.594s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m29s  kubelet            Created container: mount-munger
	  Normal  Started    5m29s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-vgzbw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-419649/192.168.39.90
	Start Time:       Sun, 09 Nov 2025 13:51:07 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4fzhz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4fzhz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m11s  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-vgzbw to functional-419649
	  Normal  Pulling    6m11s  kubelet            Pulling image "kicbase/echo-server"
	  Normal  Pulled     6m10s  kubelet            Successfully pulled image "kicbase/echo-server" in 919ms (919ms including waiting). Image size: 4945246 bytes.
	
	
	Name:             mysql-5bb876957f-2vzsj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-419649/192.168.39.90
	Start Time:       Sun, 09 Nov 2025 13:51:54 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.17
	IPs:
	  IP:           10.244.0.17
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hfrjp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hfrjp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m24s                default-scheduler  Successfully assigned default/mysql-5bb876957f-2vzsj to functional-419649
	  Warning  Failed     3m45s                kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     59s (x2 over 3m45s)  kubelet            Error: ErrImagePull
	  Warning  Failed     59s                  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    45s (x2 over 3m45s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     45s (x2 over 3m45s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    33s (x3 over 5m23s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-419649/192.168.39.90
	Start Time:       Sun, 09 Nov 2025 13:51:15 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:  10.244.0.13
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9sr2c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-9sr2c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m3s                  default-scheduler  Successfully assigned default/sp-pod to functional-419649
	  Warning  Failed     2m45s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m20s (x3 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     29s (x2 over 5m32s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     29s (x3 over 5m32s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    3s (x4 over 5m31s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     3s (x4 over 5m31s)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-nhqbg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-9z7jc" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-419649 describe pod busybox-mount hello-node-75c85bcc94-vgzbw mysql-5bb876957f-2vzsj sp-pod dashboard-metrics-scraper-77bf4d6c4c-nhqbg kubernetes-dashboard-855c9754f9-9z7jc: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (371.09s)
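
Both ImagePullBackOff failures above trace back to Docker Hub's unauthenticated pull rate limit, not to the volume or provisioner under test. One possible mitigation, a sketch assuming the CI host can still pull (or already caches) the image, is to side-load it into the node so the kubelet never contacts docker.io:

	docker pull docker.io/nginx                                               # one pull on the host, authenticated or cached
	out/minikube-linux-amd64 -p functional-419649 image load docker.io/nginx  # copy it into the node's CRI-O image store

For the side-loaded copy to be used, the pod must not force a re-pull (imagePullPolicy: Always is the default for untagged/:latest images).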

                                                
                                    
TestFunctional/parallel/MySQL (603.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-419649 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
E1109 13:51:55.087244  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "mysql-5bb876957f-2vzsj" [33dac2a4-0080-4332-bb7e-013b368be634] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E1109 13:56:27.379176  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-419649 -n functional-419649
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-11-09 14:01:54.755189915 +0000 UTC m=+1985.477664314
functional_test.go:1804: (dbg) Run:  kubectl --context functional-419649 describe po mysql-5bb876957f-2vzsj -n default
functional_test.go:1804: (dbg) kubectl --context functional-419649 describe po mysql-5bb876957f-2vzsj -n default:
Name:             mysql-5bb876957f-2vzsj
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-419649/192.168.39.90
Start Time:       Sun, 09 Nov 2025 13:51:54 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.17
IPs:
  IP:           10.244.0.17
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hfrjp (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-hfrjp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-2vzsj to functional-419649
  Warning  Failed     5m35s                  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     3m5s (x2 over 8m21s)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     3m5s (x3 over 8m21s)   kubelet            Error: ErrImagePull
  Normal   BackOff    2m36s (x4 over 8m21s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
  Warning  Failed     2m36s (x4 over 8m21s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    2m21s (x4 over 9m59s)  kubelet            Pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-419649 logs mysql-5bb876957f-2vzsj -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-419649 logs mysql-5bb876957f-2vzsj -n default: exit status 1 (79.509514ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-2vzsj" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-419649 logs mysql-5bb876957f-2vzsj -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
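
The mysql pull fails for the same toomanyrequests reason. An alternative to side-loading, sketched below with placeholder values (dockerhub-creds, <user> and <token> are illustrative, not taken from this run), is to attach an image pull secret to the default service account so the kubelet pulls with authentication and the higher authenticated rate limit:

	kubectl --context functional-419649 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>
	kubectl --context functional-419649 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'

The patch covers every pod in the namespace that uses the default service account, including this Deployment's replicas.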
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-419649 -n functional-419649
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-419649 logs -n 25: (1.982522902s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-419649 ssh -- ls -la /mount-9p                                                                         │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh            │ functional-419649 ssh sudo umount -f /mount-9p                                                                    │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ ssh            │ functional-419649 ssh findmnt -T /mount1                                                                          │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ mount          │ -p functional-419649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup522304696/001:/mount3 --alsologtostderr -v=1 │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ mount          │ -p functional-419649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup522304696/001:/mount1 --alsologtostderr -v=1 │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ mount          │ -p functional-419649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup522304696/001:/mount2 --alsologtostderr -v=1 │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ ssh            │ functional-419649 ssh findmnt -T /mount1                                                                          │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh            │ functional-419649 ssh findmnt -T /mount2                                                                          │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ ssh            │ functional-419649 ssh findmnt -T /mount3                                                                          │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │ 09 Nov 25 13:51 UTC │
	│ mount          │ -p functional-419649 --kill=true                                                                                  │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:51 UTC │                     │
	│ update-context │ functional-419649 update-context --alsologtostderr -v=2                                                           │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │ 09 Nov 25 13:56 UTC │
	│ update-context │ functional-419649 update-context --alsologtostderr -v=2                                                           │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │ 09 Nov 25 13:56 UTC │
	│ update-context │ functional-419649 update-context --alsologtostderr -v=2                                                           │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │ 09 Nov 25 13:56 UTC │
	│ image          │ functional-419649 image ls --format short --alsologtostderr                                                       │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │ 09 Nov 25 13:56 UTC │
	│ image          │ functional-419649 image ls --format yaml --alsologtostderr                                                        │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │ 09 Nov 25 13:56 UTC │
	│ ssh            │ functional-419649 ssh pgrep buildkitd                                                                             │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │                     │
	│ image          │ functional-419649 image build -t localhost/my-image:functional-419649 testdata/build --alsologtostderr            │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │ 09 Nov 25 13:56 UTC │
	│ image          │ functional-419649 image ls                                                                                        │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │ 09 Nov 25 13:56 UTC │
	│ image          │ functional-419649 image ls --format json --alsologtostderr                                                        │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │ 09 Nov 25 13:56 UTC │
	│ image          │ functional-419649 image ls --format table --alsologtostderr                                                       │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 13:56 UTC │ 09 Nov 25 13:56 UTC │
	│ service        │ functional-419649 service list                                                                                    │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 14:01 UTC │ 09 Nov 25 14:01 UTC │
	│ service        │ functional-419649 service list -o json                                                                            │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 14:01 UTC │ 09 Nov 25 14:01 UTC │
	│ service        │ functional-419649 service --namespace=default --https --url hello-node                                            │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 14:01 UTC │                     │
	│ service        │ functional-419649 service hello-node --url --format={{.IP}}                                                       │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 14:01 UTC │                     │
	│ service        │ functional-419649 service hello-node --url                                                                        │ functional-419649 │ jenkins │ v1.37.0 │ 09 Nov 25 14:01 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:51:39
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:51:39.860979  561575 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:51:39.861103  561575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:51:39.861110  561575 out.go:374] Setting ErrFile to fd 2...
	I1109 13:51:39.861115  561575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:51:39.861532  561575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 13:51:39.862180  561575 out.go:368] Setting JSON to false
	I1109 13:51:39.863220  561575 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":70449,"bootTime":1762625851,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 13:51:39.863355  561575 start.go:143] virtualization: kvm guest
	I1109 13:51:39.865116  561575 out.go:179] * [functional-419649] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 13:51:39.866506  561575 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:51:39.866542  561575 notify.go:221] Checking for updates...
	I1109 13:51:39.869030  561575 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:51:39.870218  561575 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 13:51:39.871342  561575 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 13:51:39.872675  561575 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 13:51:39.873970  561575 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:51:39.875604  561575 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:51:39.876177  561575 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:51:39.915932  561575 out.go:179] * Using the kvm2 driver based on existing profile
	I1109 13:51:39.917245  561575 start.go:309] selected driver: kvm2
	I1109 13:51:39.917274  561575 start.go:930] validating driver "kvm2" against &{Name:functional-419649 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-419649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:51:39.917426  561575 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:51:39.919670  561575 out.go:203] 
	W1109 13:51:39.920739  561575 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1109 13:51:39.921941  561575 out.go:203] 
	
	
	==> CRI-O <==
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.804147568Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762696915804118884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203239,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=022d2cf0-9602-40e7-ac25-d5e1c55ac577 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.805434175Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=379c7246-d9b4-4692-90c7-1bea20f650f7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.805521790Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=379c7246-d9b4-4692-90c7-1bea20f650f7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.805952060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4bfeb1eaf4e15a5f6e37e7754b859374bfe1b44c20eebb4ebbf05b097e18c3c,PodSandboxId:a1a33c028bd43eb4e7eb2debd0b100c09d8c5ea5c4e6cb4a5f501c8e57bd2b9f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1762696309056036951,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a19c92c2-78f7-4060-ac8a-b2554d1b04cb,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e33bb0e4b8eb41966938874d3c927e9012ef3133494f78fc3a0915e652a8cc,PodSandboxId:b4f6bbf257db3b8fbac0994f6910a975771946cefa56c4fcedb8bc1329d92bd5,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1762696269236248644,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-7q2d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f5d2732-44a0-47cb-8c9d-f5e4b42d8bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2f786eb6996484feae9a6edf630d709e85075222b420eeacbd4f5bcdf42fe9,PodSandboxId:d343fe3deeb567979594b26e1177a2a5ab212e7e352b36c5d4d62140943587d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762696013461666904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7de985bd6dba90bd0013a202ba648a3374b5e13c9eac145c1ff3a556ad18b11,PodSandboxId:55a203c6316403aa49a2955a0853041d78c5074838c3bb8ebaa82188ae2d598d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:176269601345111480
5,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccfd82eb8e555e10a2eafc36a68aa554bade3da97f156fb8739c88f2a38cf0e,PodSandboxId:17039e2b70c72e705a132625c23567c3638c1b25f3af9b63ed1849d52da
a2b63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762696012872773465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7004426713a395e63c67bf1a805682492109c2bf495d3246fcb0f28c29099d7,PodSandboxId:9b614ffdc2151d1580f4fc9869a755d2e41aff7bd3c05802acf032de23b91e7c,Metada
ta:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762696008010989457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeac0983c075eb11541843961479d16f1ab06804bc32ece9e1a68fd86e0050c7,PodSan
dboxId:c63f4500183b22de7cb690d64315a984561f054ec009c59cdf26b6427d6060b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762696008044381598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f301eddabdc4748ec29a00b5d541088744c5ab8cc5777cea1ba92eb84008e179,PodSandboxId:b5a20367f799b5092d5b4a95d5392074f211426e75222c44241f261f7b05e763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762696007936737203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd12d128087ae793d7d792e812c6f6ccca052788ddf81a1e90ea30b64ab4849,PodSandboxId:6bda3fdf294e3217b2c066d240d843a37e0880dcedbf78cb455f103455016be0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762696007910619648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4c301aa576e79baa0710f6d51bb504,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostP
ort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c23c19796791474d9cbefd6426cedf9b06ba08c535f77b519664e5d5fe0e1da,PodSandboxId:cbb25c67e227ae6d14f5d31b745da29ae24e0b42896047793ff6795b34dbc3bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883942673163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.
container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dabded828a263500488aec4eed71cc058df72adef2bcdb48a96af101f8296650,PodSandboxId:8575db975cef01eb89fd28f03c6fa8fdca33842a2041979b632356575beb2d34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa
6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883798388910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ec1c8b227e612c946139a2114c20f15d630c80f99ada219f51c961
793c721b,PodSandboxId:209b0ec75c725d6d45ec9e42a7ba7a936de6699df431c66f39fcb3aec80a1ab1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1762695883066023200,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tw9jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d037d4-fe85-43d4-8322-67e3cf4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6678989530f542c497fe013e487fb90f58d077213ceab56369a11083ac93d8aa,PodSandboxId:8cbe5cd0f783
4ad7420d084807c5915acd62cf10d4ddf89d1d29edf6f0936966,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1762695883122410369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a05faf8a4dc9342caa1ccb9a3986f838c476148641b63fc61957a6a5f78d3,PodSandboxId:99b115d10596e4e2acea4b39e
1c2773d555e5bcd31c5829f19e770b5d584ef64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1762695878365041555,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:9251976d9d7b91c3dd81cc2547a4867d13c44581df4bf007267d504fee90a86e,PodSandboxId:7d83e0a07a96bc1167687657e35bdf244ca76065332caccf2cbd9d4fe64ff29c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1762695878239061096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5b8301c397c59a54b37b9851ca454244b8fbec40b66bf02f5df0283be65347,PodSandboxId:783c1d6ed75e1986bb80f86c0a3ef9c7b34fe784b7c558e9e71a8dae6e90a890,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1762695878231412184,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=379c7246-d9b4-4692-90c7-1bea20f650f7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.851325635Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=65275abc-918d-4300-a3c6-db68eb61f297 name=/runtime.v1.RuntimeService/Version
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.851557607Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=65275abc-918d-4300-a3c6-db68eb61f297 name=/runtime.v1.RuntimeService/Version
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.855714552Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9520916d-08b9-4837-9bf6-a7dc5a05ee82 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.857920465Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762696915857881765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203239,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9520916d-08b9-4837-9bf6-a7dc5a05ee82 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.858897227Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=397e2338-3cf7-42ad-9911-24f3a913ac49 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.858963390Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=397e2338-3cf7-42ad-9911-24f3a913ac49 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.859569778Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4bfeb1eaf4e15a5f6e37e7754b859374bfe1b44c20eebb4ebbf05b097e18c3c,PodSandboxId:a1a33c028bd43eb4e7eb2debd0b100c09d8c5ea5c4e6cb4a5f501c8e57bd2b9f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1762696309056036951,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a19c92c2-78f7-4060-ac8a-b2554d1b04cb,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e33bb0e4b8eb41966938874d3c927e9012ef3133494f78fc3a0915e652a8cc,PodSandboxId:b4f6bbf257db3b8fbac0994f6910a975771946cefa56c4fcedb8bc1329d92bd5,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1762696269236248644,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-7q2d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f5d2732-44a0-47cb-8c9d-f5e4b42d8bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2f786eb6996484feae9a6edf630d709e85075222b420eeacbd4f5bcdf42fe9,PodSandboxId:d343fe3deeb567979594b26e1177a2a5ab212e7e352b36c5d4d62140943587d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762696013461666904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7de985bd6dba90bd0013a202ba648a3374b5e13c9eac145c1ff3a556ad18b11,PodSandboxId:55a203c6316403aa49a2955a0853041d78c5074838c3bb8ebaa82188ae2d598d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:176269601345111480
5,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccfd82eb8e555e10a2eafc36a68aa554bade3da97f156fb8739c88f2a38cf0e,PodSandboxId:17039e2b70c72e705a132625c23567c3638c1b25f3af9b63ed1849d52da
a2b63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762696012872773465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7004426713a395e63c67bf1a805682492109c2bf495d3246fcb0f28c29099d7,PodSandboxId:9b614ffdc2151d1580f4fc9869a755d2e41aff7bd3c05802acf032de23b91e7c,Metada
ta:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762696008010989457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeac0983c075eb11541843961479d16f1ab06804bc32ece9e1a68fd86e0050c7,PodSan
dboxId:c63f4500183b22de7cb690d64315a984561f054ec009c59cdf26b6427d6060b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762696008044381598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f301eddabdc4748ec29a00b5d541088744c5ab8cc5777cea1ba92eb84008e179,PodSandboxId:b5a20367f799b5092d5b4a95d5392074f211426e75222c44241f261f7b05e763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762696007936737203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd12d128087ae793d7d792e812c6f6ccca052788ddf81a1e90ea30b64ab4849,PodSandboxId:6bda3fdf294e3217b2c066d240d843a37e0880dcedbf78cb455f103455016be0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762696007910619648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4c301aa576e79baa0710f6d51bb504,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostP
ort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c23c19796791474d9cbefd6426cedf9b06ba08c535f77b519664e5d5fe0e1da,PodSandboxId:cbb25c67e227ae6d14f5d31b745da29ae24e0b42896047793ff6795b34dbc3bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883942673163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.
container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dabded828a263500488aec4eed71cc058df72adef2bcdb48a96af101f8296650,PodSandboxId:8575db975cef01eb89fd28f03c6fa8fdca33842a2041979b632356575beb2d34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa
6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883798388910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ec1c8b227e612c946139a2114c20f15d630c80f99ada219f51c961793c721b,PodSandboxId:209b0ec75c725d6d45ec9e42a7ba7a936de6699df431c66f39fcb3aec80a1ab1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1762695883066023200,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tw9jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d037d4-fe85-43d4-8322-67e3cf4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6678989530f542c497fe013e487fb90f58d077213ceab56369a11083ac93d8aa,PodSandboxId:8cbe5cd0f7834ad7420d084807c5915acd62cf10d4ddf89d1d29edf6f0936966,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1762695883122410369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a05faf8a4dc9342caa1ccb9a3986f838c476148641b63fc61957a6a5f78d3,PodSandboxId:99b115d10596e4e2acea4b39e1c2773d555e5bcd31c5829f19e770b5d584ef64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1762695878365041555,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9251976d9d7b91c3dd81cc2547a4867d13c44581df4bf007267d504fee90a86e,PodSandboxId:7d83e0a07a96bc1167687657e35bdf244ca76065332caccf2cbd9d4fe64ff29c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1762695878239061096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5b8301c397c59a54b37b9851ca454244b8fbec40b66bf02f5df0283be65347,PodSandboxId:783c1d6ed75e1986bb80f86c0a3ef9c7b34fe784b7c558e9e71a8dae6e90a890,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1762695878231412184,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=397e2338-3cf7-42ad-9911-24f3a913ac49 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.911954235Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2ebbf65c-45fe-487c-b375-260e50782e1c name=/runtime.v1.RuntimeService/Version
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.912056060Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2ebbf65c-45fe-487c-b375-260e50782e1c name=/runtime.v1.RuntimeService/Version
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.914474767Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9340c08-516b-444d-a619-b7cea64a57cb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.915215499Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762696915915185432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203239,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9340c08-516b-444d-a619-b7cea64a57cb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.916266701Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41250a6c-9368-4eb7-b7e8-fa1ce3b205a9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.916333889Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41250a6c-9368-4eb7-b7e8-fa1ce3b205a9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.916708576Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4bfeb1eaf4e15a5f6e37e7754b859374bfe1b44c20eebb4ebbf05b097e18c3c,PodSandboxId:a1a33c028bd43eb4e7eb2debd0b100c09d8c5ea5c4e6cb4a5f501c8e57bd2b9f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1762696309056036951,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a19c92c2-78f7-4060-ac8a-b2554d1b04cb,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e33bb0e4b8eb41966938874d3c927e9012ef3133494f78fc3a0915e652a8cc,PodSandboxId:b4f6bbf257db3b8fbac0994f6910a975771946cefa56c4fcedb8bc1329d92bd5,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1762696269236248644,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-7q2d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f5d2732-44a0-47cb-8c9d-f5e4b42d8bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df2f786eb6996484feae9a6edf630d709e85075222b420eeacbd4f5bcdf42fe9,PodSandboxId:d343fe3deeb567979594b26e1177a2a5ab212e7e352b36c5d4d62140943587d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762696013461666904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7de985bd6dba90bd0013a202ba648a3374b5e13c9eac145c1ff3a556ad18b11,PodSandboxId:55a203c6316403aa49a2955a0853041d78c5074838c3bb8ebaa82188ae2d598d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:176269601345111480
5,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccfd82eb8e555e10a2eafc36a68aa554bade3da97f156fb8739c88f2a38cf0e,PodSandboxId:17039e2b70c72e705a132625c23567c3638c1b25f3af9b63ed1849d52da
a2b63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762696012872773465,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7004426713a395e63c67bf1a805682492109c2bf495d3246fcb0f28c29099d7,PodSandboxId:9b614ffdc2151d1580f4fc9869a755d2e41aff7bd3c05802acf032de23b91e7c,Metada
ta:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762696008010989457,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeac0983c075eb11541843961479d16f1ab06804bc32ece9e1a68fd86e0050c7,PodSan
dboxId:c63f4500183b22de7cb690d64315a984561f054ec009c59cdf26b6427d6060b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762696008044381598,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f301eddabdc4748ec29a00b5d541088744c5ab8cc5777cea1ba92eb84008e179,PodSandboxId:b5a20367f799b5092d5b4a95d5392074f211426e75222c44241f261f7b05e763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762696007936737203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efd12d128087ae793d7d792e812c6f6ccca052788ddf81a1e90ea30b64ab4849,PodSandboxId:6bda3fdf294e3217b2c066d240d843a37e0880dcedbf78cb455f103455016be0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762696007910619648,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4c301aa576e79baa0710f6d51bb504,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostP
ort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c23c19796791474d9cbefd6426cedf9b06ba08c535f77b519664e5d5fe0e1da,PodSandboxId:cbb25c67e227ae6d14f5d31b745da29ae24e0b42896047793ff6795b34dbc3bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883942673163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wkwss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc2b7dd4-023d-4994-9237-fabeae6e63ce,},Annotations:map[string]string{io.kubernetes.
container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dabded828a263500488aec4eed71cc058df72adef2bcdb48a96af101f8296650,PodSandboxId:8575db975cef01eb89fd28f03c6fa8fdca33842a2041979b632356575beb2d34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa
6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1762695883798388910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zrw7g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9011e98a-2a19-48e0-8e28-8bddfcffc50c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ec1c8b227e612c946139a2114c20f15d630c80f99ada219f51c961
793c721b,PodSandboxId:209b0ec75c725d6d45ec9e42a7ba7a936de6699df431c66f39fcb3aec80a1ab1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1762695883066023200,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tw9jj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d037d4-fe85-43d4-8322-67e3cf4a7b89,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6678989530f542c497fe013e487fb90f58d077213ceab56369a11083ac93d8aa,PodSandboxId:8cbe5cd0f783
4ad7420d084807c5915acd62cf10d4ddf89d1d29edf6f0936966,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1762695883122410369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae48a075-3b00-486c-b8b2-6b2080262987,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a05faf8a4dc9342caa1ccb9a3986f838c476148641b63fc61957a6a5f78d3,PodSandboxId:99b115d10596e4e2acea4b39e
1c2773d555e5bcd31c5829f19e770b5d584ef64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1762695878365041555,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 274cf2193394c035a9ce4fd611eef33b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:9251976d9d7b91c3dd81cc2547a4867d13c44581df4bf007267d504fee90a86e,PodSandboxId:7d83e0a07a96bc1167687657e35bdf244ca76065332caccf2cbd9d4fe64ff29c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1762695878239061096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02798d3a566bdf1b79c9a1609aa8851,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5b8301c397c59a54b37b9851ca454244b8fbec40b66bf02f5df0283be65347,PodSandboxId:783c1d6ed75e1986bb80f86c0a3ef9c7b34fe784b7c558e9e71a8dae6e90a890,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1762695878231412184,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-419649,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62b30b4238b2d99ce79cd53f17bb6da4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41250a6c-9368-4eb7-b7e8-fa1ce3b205a9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.934993363Z" level=debug msg="Request: &ImageStatusRequest{Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c,Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-09T13:51:41.261010416Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},UserSpecifiedImage:,RuntimeHandler:,},Verbose:false,}" file="otel-collector/interceptors.go:62" id=efe23cd8-76de-4e66-a53b-d1bb656073c0 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.935103271Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" file="server/image_status.go:27" id=efe23cd8-76de-4e66-a53b-d1bb656073c0 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.935285454Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" does not resolve to an image ID" file="storage/storage_reference.go:149"
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.935423451Z" level=debug msg="Can't find docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" file="server/image_status.go:97" id=efe23cd8-76de-4e66-a53b-d1bb656073c0 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.935466550Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" file="server/image_status.go:111" id=efe23cd8-76de-4e66-a53b-d1bb656073c0 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.935490338Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" file="server/image_status.go:33" id=efe23cd8-76de-4e66-a53b-d1bb656073c0 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:01:55 functional-419649 crio[6056]: time="2025-11-09 14:01:55.935527264Z" level=debug msg="Response: &ImageStatusResponse{Image:nil,Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=efe23cd8-76de-4e66-a53b-d1bb656073c0 name=/runtime.v1.ImageService/ImageStatus
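
The crio entries above trace the gRPC surface of the Kubernetes CRI: Version and ListContainers are RuntimeService RPCs, while ImageFsInfo and ImageStatus belong to the ImageService, all served over the runtime's unix socket. Below is a minimal sketch of issuing the same ListContainers call with the k8s.io/cri-api Go bindings; the socket path and the grpc-go client setup are assumptions for illustration, not code taken from the test harness.

// cri_list.go: list containers over the CRI socket, mirroring the
// /runtime.v1.RuntimeService/ListContainers calls in the log above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed cri-o socket path; the kubelet and crictl use the same endpoint.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter takes the "No filters were applied, returning full
	// container list" path that crio logs above.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
		ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// First 13 hex chars of the Id, as in the container status table below.
		fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
	}
}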
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a4bfeb1eaf4e1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e     10 minutes ago      Exited              mount-munger              0                   a1a33c028bd43       busybox-mount
	92e33bb0e4b8e       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6   10 minutes ago      Running             echo-server               0                   b4f6bbf257db3       hello-node-connect-7d85dfc575-7q2d9
	df2f786eb6996       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        15 minutes ago      Running             coredns                   2                   d343fe3deeb56       coredns-66bc5c9577-wkwss
	b7de985bd6dba       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        15 minutes ago      Running             coredns                   2                   55a203c631640       coredns-66bc5c9577-zrw7g
	4ccfd82eb8e55       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        15 minutes ago      Running             storage-provisioner       3                   17039e2b70c72       storage-provisioner
	eeac0983c075e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                        15 minutes ago      Running             kube-scheduler            2                   c63f4500183b2       kube-scheduler-functional-419649
	a7004426713a3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        15 minutes ago      Running             etcd                      2                   9b614ffdc2151       etcd-functional-419649
	f301eddabdc47       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                        15 minutes ago      Running             kube-controller-manager   2                   b5a20367f799b       kube-controller-manager-functional-419649
	efd12d128087a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                        15 minutes ago      Running             kube-apiserver            0                   6bda3fdf294e3       kube-apiserver-functional-419649
	5c23c19796791       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        17 minutes ago      Exited              coredns                   1                   cbb25c67e227a       coredns-66bc5c9577-wkwss
	dabded828a263       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        17 minutes ago      Exited              coredns                   1                   8575db975cef0       coredns-66bc5c9577-zrw7g
	6678989530f54       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        17 minutes ago      Exited              storage-provisioner       2                   8cbe5cd0f7834       storage-provisioner
	a7ec1c8b227e6       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                        17 minutes ago      Exited              kube-proxy                1                   209b0ec75c725       kube-proxy-tw9jj
	455a05faf8a4d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                        17 minutes ago      Exited              kube-scheduler            1                   99b115d10596e       kube-scheduler-functional-419649
	9251976d9d7b9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        17 minutes ago      Exited              etcd                      1                   7d83e0a07a96b       etcd-functional-419649
	3c5b8301c397c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                        17 minutes ago      Exited              kube-controller-manager   1                   783c1d6ed75e1       kube-controller-manager-functional-419649
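
The CREATED and ATTEMPT columns above are derived from the raw ListContainers data: CreatedAt is a unix-nanosecond timestamp and ATTEMPT mirrors the io.kubernetes.container.restartCount annotation. A quick consistency check in Go, with both timestamps copied from the log (the attempt-1 coredns container and the ImageFsInfo response taken at 14:01:55):

package main

import (
	"fmt"
	"time"
)

func main() {
	created := time.Unix(0, 1762695883798388910) // coredns-66bc5c9577-zrw7g, attempt 1
	now := time.Unix(0, 1762696915915185432)     // ImageFsInfo timestamp, 14:01:55 UTC
	fmt.Println(created.UTC())                   // 2025-11-09 13:44:43 UTC (plus fractional seconds)
	fmt.Println(now.Sub(created).Round(time.Minute)) // 17m0s, matching "17 minutes ago"
}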
	
	
	==> coredns [5c23c19796791474d9cbefd6426cedf9b06ba08c535f77b519664e5d5fe0e1da] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44800 - 53651 "HINFO IN 1400593764380402781.8492229270877848689. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025331213s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b7de985bd6dba90bd0013a202ba648a3374b5e13c9eac145c1ff3a556ad18b11] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57144 - 62700 "HINFO IN 7058232511171921965.4871291774030092335. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.426433128s
	
	
	==> coredns [dabded828a263500488aec4eed71cc058df72adef2bcdb48a96af101f8296650] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49606 - 37184 "HINFO IN 9083893957874782947.4427150369312914047. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.433296592s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [df2f786eb6996484feae9a6edf630d709e85075222b420eeacbd4f5bcdf42fe9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45303 - 52022 "HINFO IN 6137322651211618462.5033134956876486643. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.038624673s
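
All four coredns instances report the same configuration SHA512, and their port annotations in the ListContainers output (53/dns, 9153/metrics, 8080/liveness-probe, 8181/readiness-probe) line up with the stock kubeadm Corefile, whose health plugin also produces the 5s lameduck shutdown logged by the exited instances. A representative fragment, reconstructed from the kubeadm defaults rather than dumped from this cluster:

.:53 {
    errors
    health {
       lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
       ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}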
	
	
	==> describe nodes <==
	Name:               functional-419649
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-419649
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=functional-419649
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T13_43_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:43:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-419649
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:01:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 13:57:22 +0000   Sun, 09 Nov 2025 13:43:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 13:57:22 +0000   Sun, 09 Nov 2025 13:43:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 13:57:22 +0000   Sun, 09 Nov 2025 13:43:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 13:57:22 +0000   Sun, 09 Nov 2025 13:43:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.90
	  Hostname:    functional-419649
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 f58fffb24b824db893edaea13eb8cd34
	  System UUID:                f58fffb2-4b82-4db8-93ed-aea13eb8cd34
	  Boot ID:                    10be153f-12a9-4056-a1dd-41beb5dacdf5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-vgzbw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-7q2d9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-2vzsj                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-wkwss                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     18m
	  kube-system                 coredns-66bc5c9577-zrw7g                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     18m
	  kube-system                 etcd-functional-419649                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         18m
	  kube-system                 kube-apiserver-functional-419649              250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-functional-419649     200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-tw9jj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-functional-419649              100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-nhqbg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9z7jc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (72%)  700m (35%)
	  memory             752Mi (19%)  1040Mi (26%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x3 over 18m)  kubelet          Node functional-419649 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x3 over 18m)  kubelet          Node functional-419649 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x3 over 18m)  kubelet          Node functional-419649 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     18m                kubelet          Node functional-419649 status is now: NodeHasSufficientPID
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node functional-419649 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node functional-419649 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                18m                kubelet          Node functional-419649 status is now: NodeReady
	  Normal  RegisteredNode           18m                node-controller  Node functional-419649 event: Registered Node functional-419649 in Controller
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node functional-419649 status is now: NodeHasSufficientMemory
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node functional-419649 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node functional-419649 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17m                node-controller  Node functional-419649 event: Registered Node functional-419649 in Controller
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node functional-419649 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node functional-419649 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node functional-419649 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                node-controller  Node functional-419649 event: Registered Node functional-419649 in Controller
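
The node report above is standard kubectl describe node output as captured by the log collector. The same Ready condition can be read programmatically with client-go; the sketch below assumes a reachable kubeconfig at the default location and is illustrative, not the harness's own code.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config points at the functional-419649 cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-419649", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s since=%s\n", c.Status, c.Reason, c.LastTransitionTime)
		}
	}
}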
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.100058] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.121158] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.109438] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.192670] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.028077] kauditd_printk_skb: 18 callbacks suppressed
	[Nov 9 13:44] kauditd_printk_skb: 249 callbacks suppressed
	[  +5.534068] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.229697] kauditd_printk_skb: 78 callbacks suppressed
	[  +1.140310] kauditd_printk_skb: 57 callbacks suppressed
	[  +0.132391] kauditd_printk_skb: 194 callbacks suppressed
	[ +11.536468] kauditd_printk_skb: 116 callbacks suppressed
	[Nov 9 13:45] kauditd_printk_skb: 12 callbacks suppressed
	[Nov 9 13:46] kauditd_printk_skb: 263 callbacks suppressed
	[  +0.942624] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.198959] kauditd_printk_skb: 150 callbacks suppressed
	[Nov 9 13:47] kauditd_printk_skb: 125 callbacks suppressed
	[Nov 9 13:51] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.083221] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.000364] kauditd_printk_skb: 78 callbacks suppressed
	[  +0.000133] kauditd_printk_skb: 62 callbacks suppressed
	[  +7.412474] kauditd_printk_skb: 62 callbacks suppressed
	[  +0.000033] kauditd_printk_skb: 25 callbacks suppressed
	[Nov 9 13:56] crun[10355]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +3.099297] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [9251976d9d7b91c3dd81cc2547a4867d13c44581df4bf007267d504fee90a86e] <==
	{"level":"warn","ts":"2025-11-09T13:44:40.717700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:44:40.747628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:44:40.762504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:44:40.787093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:44:40.809876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:44:40.833047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:44:40.947438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45540","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T13:45:03.891974Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-09T13:45:03.892178Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-419649","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.90:2380"],"advertise-client-urls":["https://192.168.39.90:2379"]}
	{"level":"error","ts":"2025-11-09T13:45:03.892264Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-09T13:45:03.892316Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-09T13:45:03.971568Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T13:45:03.971624Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8d381aaacda0b9bd","current-leader-member-id":"8d381aaacda0b9bd"}
	{"level":"warn","ts":"2025-11-09T13:45:03.971649Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T13:45:03.971856Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T13:45:03.971872Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T13:45:03.971893Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-09T13:45:03.971779Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-09T13:45:03.971967Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.90:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T13:45:03.971979Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.90:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T13:45:03.971986Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.90:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T13:45:03.975952Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.90:2380"}
	{"level":"error","ts":"2025-11-09T13:45:03.976056Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.90:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T13:45:03.976084Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.90:2380"}
	{"level":"info","ts":"2025-11-09T13:45:03.976113Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-419649","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.90:2380"],"advertise-client-urls":["https://192.168.39.90:2379"]}
	
	
	==> etcd [a7004426713a395e63c67bf1a805682492109c2bf495d3246fcb0f28c29099d7] <==
	{"level":"warn","ts":"2025-11-09T13:46:50.296069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.315968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.326763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.340581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.359060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.373234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.389155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.403233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.419191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.445351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.477707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.487681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.498150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.512979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.520588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.532487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.546948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.557302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:46:50.616155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54108","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T13:56:49.442324Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1072}
	{"level":"info","ts":"2025-11-09T13:56:49.476661Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1072,"took":"33.865086ms","hash":4179198837,"current-db-size-bytes":3330048,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1560576,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-11-09T13:56:49.476717Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4179198837,"revision":1072,"compact-revision":-1}
	{"level":"info","ts":"2025-11-09T14:01:49.451681Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1396}
	{"level":"info","ts":"2025-11-09T14:01:49.457323Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1396,"took":"5.168308ms","hash":710524442,"current-db-size-bytes":3330048,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":2228224,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-11-09T14:01:49.457391Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":710524442,"revision":1396,"compact-revision":1072}
	
	
	==> kernel <==
	 14:01:56 up 19 min,  0 users,  load average: 0.29, 0.26, 0.20
	Linux functional-419649 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Nov  1 20:49:51 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [efd12d128087ae793d7d792e812c6f6ccca052788ddf81a1e90ea30b64ab4849] <==
	I1109 13:46:51.566773       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 13:46:51.569360       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 13:46:51.569444       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 13:46:51.569452       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 13:46:51.571432       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 13:46:51.571516       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 13:46:51.583391       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1109 13:46:51.608585       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1109 13:46:51.898682       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 13:46:52.389190       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 13:46:54.236687       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 13:46:54.293952       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 13:46:54.332997       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 13:46:54.343624       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 13:46:56.099629       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 13:46:56.298692       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 13:46:56.399527       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 13:51:02.151207       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.49.22"}
	I1109 13:51:07.123619       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.49.89"}
	I1109 13:51:08.337891       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.221.72"}
	I1109 13:51:40.973107       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 13:51:41.359605       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.76.215"}
	I1109 13:51:41.382444       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.72.212"}
	I1109 13:51:54.435031       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.201.164"}
	I1109 13:56:51.482896       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [3c5b8301c397c59a54b37b9851ca454244b8fbec40b66bf02f5df0283be65347] <==
	I1109 13:44:46.211484       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 13:44:46.211674       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 13:44:46.211775       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-419649"
	I1109 13:44:46.211938       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1109 13:44:46.220947       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1109 13:44:46.224727       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 13:44:46.224832       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1109 13:44:46.226268       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1109 13:44:46.226303       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1109 13:44:46.226310       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1109 13:44:46.226317       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1109 13:44:46.229235       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1109 13:44:46.231468       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 13:44:46.233055       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 13:44:46.236148       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1109 13:44:46.242644       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1109 13:44:46.244477       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1109 13:44:46.244739       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1109 13:44:46.245344       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 13:44:46.246729       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1109 13:44:46.246887       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 13:44:46.248505       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 13:44:46.252228       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1109 13:44:46.255601       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 13:44:46.258981       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-controller-manager [f301eddabdc4748ec29a00b5d541088744c5ab8cc5777cea1ba92eb84008e179] <==
	I1109 13:46:55.929401       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 13:46:55.935001       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 13:46:55.939490       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1109 13:46:55.943866       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1109 13:46:55.944041       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1109 13:46:55.944130       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 13:46:55.944198       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 13:46:55.944698       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 13:46:55.944849       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 13:46:55.944937       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 13:46:55.945195       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-419649"
	I1109 13:46:55.945257       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1109 13:46:55.945718       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1109 13:46:55.948124       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 13:46:55.949990       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 13:46:55.956456       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1109 13:46:55.965862       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1109 13:51:41.099168       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.133658       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.142440       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.145364       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.165332       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.174762       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.175487       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1109 13:51:41.182280       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [a7ec1c8b227e612c946139a2114c20f15d630c80f99ada219f51c961793c721b] <==
	I1109 13:44:43.756193       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 13:44:43.857352       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 13:44:43.857411       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.90"]
	E1109 13:44:43.857528       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 13:44:44.033990       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1109 13:44:44.034236       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1109 13:44:44.034359       1 server_linux.go:132] "Using iptables Proxier"
	I1109 13:44:44.063937       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 13:44:44.064227       1 server.go:527] "Version info" version="v1.34.1"
	I1109 13:44:44.064263       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:44:44.066418       1 config.go:106] "Starting endpoint slice config controller"
	I1109 13:44:44.066493       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 13:44:44.069250       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 13:44:44.069283       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 13:44:44.075985       1 config.go:200] "Starting service config controller"
	I1109 13:44:44.076022       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 13:44:44.076752       1 config.go:309] "Starting node config controller"
	I1109 13:44:44.076862       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 13:44:44.076872       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 13:44:44.166673       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1109 13:44:44.170507       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 13:44:44.177107       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [455a05faf8a4dc9342caa1ccb9a3986f838c476148641b63fc61957a6a5f78d3] <==
	I1109 13:44:39.692053       1 serving.go:386] Generated self-signed cert in-memory
	W1109 13:44:41.712695       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 13:44:41.712723       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 13:44:41.712732       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 13:44:41.712738       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 13:44:41.813904       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 13:44:41.813983       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:44:41.819415       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:44:41.819568       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:44:41.820003       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 13:44:41.820117       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 13:44:41.921485       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:45:03.884393       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1109 13:45:03.884459       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1109 13:45:03.884484       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1109 13:45:03.884518       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:45:03.884632       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1109 13:45:03.884711       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [eeac0983c075eb11541843961479d16f1ab06804bc32ece9e1a68fd86e0050c7] <==
	I1109 13:46:50.361953       1 serving.go:386] Generated self-signed cert in-memory
	W1109 13:46:51.486251       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 13:46:51.486300       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 13:46:51.486312       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 13:46:51.486319       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 13:46:51.563170       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 13:46:51.565842       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:46:51.570632       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 13:46:51.573338       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:46:51.573391       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:46:51.573409       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 13:46:51.674422       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:01:30 functional-419649 kubelet[6431]: E1109 14:01:30.954450    6431 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists"
	Nov 09 14:01:30 functional-419649 kubelet[6431]: E1109 14:01:30.954541    6431 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists" pod="kube-system/kube-proxy-tw9jj"
	Nov 09 14:01:30 functional-419649 kubelet[6431]: E1109 14:01:30.954582    6431 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists" pod="kube-system/kube-proxy-tw9jj"
	Nov 09 14:01:30 functional-419649 kubelet[6431]: E1109 14:01:30.955753    6431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-tw9jj_kube-system(64d037d4-fe85-43d4-8322-67e3cf4a7b89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-tw9jj_kube-system(64d037d4-fe85-43d4-8322-67e3cf4a7b89)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\\\" already exists\"" pod="kube-system/kube-proxy-tw9jj" podUID="64d037d4-fe85-43d4-8322-67e3cf4a7b89"
	Nov 09 14:01:37 functional-419649 kubelet[6431]: E1109 14:01:37.399154    6431 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762696897398737937  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203239}  inodes_used:{value:105}}"
	Nov 09 14:01:37 functional-419649 kubelet[6431]: E1109 14:01:37.399199    6431 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762696897398737937  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203239}  inodes_used:{value:105}}"
	Nov 09 14:01:42 functional-419649 kubelet[6431]: E1109 14:01:42.946686    6431 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists"
	Nov 09 14:01:42 functional-419649 kubelet[6431]: E1109 14:01:42.946746    6431 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists" pod="kube-system/kube-proxy-tw9jj"
	Nov 09 14:01:42 functional-419649 kubelet[6431]: E1109 14:01:42.946766    6431 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists" pod="kube-system/kube-proxy-tw9jj"
	Nov 09 14:01:42 functional-419649 kubelet[6431]: E1109 14:01:42.946868    6431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-tw9jj_kube-system(64d037d4-fe85-43d4-8322-67e3cf4a7b89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-tw9jj_kube-system(64d037d4-fe85-43d4-8322-67e3cf4a7b89)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\\\" already exists\"" pod="kube-system/kube-proxy-tw9jj" podUID="64d037d4-fe85-43d4-8322-67e3cf4a7b89"
	Nov 09 14:01:44 functional-419649 kubelet[6431]: E1109 14:01:44.938775    6431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-nhqbg" podUID="0d93ae6c-8c15-4992-8d45-0638b27bc438"
	Nov 09 14:01:47 functional-419649 kubelet[6431]: E1109 14:01:47.085541    6431 manager.go:1116] Failed to create existing container: /kubepods/burstable/podc02798d3a566bdf1b79c9a1609aa8851/crio-7d83e0a07a96bc1167687657e35bdf244ca76065332caccf2cbd9d4fe64ff29c: Error finding container 7d83e0a07a96bc1167687657e35bdf244ca76065332caccf2cbd9d4fe64ff29c: Status 404 returned error can't find the container with id 7d83e0a07a96bc1167687657e35bdf244ca76065332caccf2cbd9d4fe64ff29c
	Nov 09 14:01:47 functional-419649 kubelet[6431]: E1109 14:01:47.086331    6431 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod274cf2193394c035a9ce4fd611eef33b/crio-99b115d10596e4e2acea4b39e1c2773d555e5bcd31c5829f19e770b5d584ef64: Error finding container 99b115d10596e4e2acea4b39e1c2773d555e5bcd31c5829f19e770b5d584ef64: Status 404 returned error can't find the container with id 99b115d10596e4e2acea4b39e1c2773d555e5bcd31c5829f19e770b5d584ef64
	Nov 09 14:01:47 functional-419649 kubelet[6431]: E1109 14:01:47.086642    6431 manager.go:1116] Failed to create existing container: /kubepods/burstable/podcc2b7dd4-023d-4994-9237-fabeae6e63ce/crio-cbb25c67e227ae6d14f5d31b745da29ae24e0b42896047793ff6795b34dbc3bd: Error finding container cbb25c67e227ae6d14f5d31b745da29ae24e0b42896047793ff6795b34dbc3bd: Status 404 returned error can't find the container with id cbb25c67e227ae6d14f5d31b745da29ae24e0b42896047793ff6795b34dbc3bd
	Nov 09 14:01:47 functional-419649 kubelet[6431]: E1109 14:01:47.087176    6431 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod64d037d4-fe85-43d4-8322-67e3cf4a7b89/crio-209b0ec75c725d6d45ec9e42a7ba7a936de6699df431c66f39fcb3aec80a1ab1: Error finding container 209b0ec75c725d6d45ec9e42a7ba7a936de6699df431c66f39fcb3aec80a1ab1: Status 404 returned error can't find the container with id 209b0ec75c725d6d45ec9e42a7ba7a936de6699df431c66f39fcb3aec80a1ab1
	Nov 09 14:01:47 functional-419649 kubelet[6431]: E1109 14:01:47.087457    6431 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podae48a075-3b00-486c-b8b2-6b2080262987/crio-8cbe5cd0f7834ad7420d084807c5915acd62cf10d4ddf89d1d29edf6f0936966: Error finding container 8cbe5cd0f7834ad7420d084807c5915acd62cf10d4ddf89d1d29edf6f0936966: Status 404 returned error can't find the container with id 8cbe5cd0f7834ad7420d084807c5915acd62cf10d4ddf89d1d29edf6f0936966
	Nov 09 14:01:47 functional-419649 kubelet[6431]: E1109 14:01:47.087776    6431 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod62b30b4238b2d99ce79cd53f17bb6da4/crio-783c1d6ed75e1986bb80f86c0a3ef9c7b34fe784b7c558e9e71a8dae6e90a890: Error finding container 783c1d6ed75e1986bb80f86c0a3ef9c7b34fe784b7c558e9e71a8dae6e90a890: Status 404 returned error can't find the container with id 783c1d6ed75e1986bb80f86c0a3ef9c7b34fe784b7c558e9e71a8dae6e90a890
	Nov 09 14:01:47 functional-419649 kubelet[6431]: E1109 14:01:47.088087    6431 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod9011e98a-2a19-48e0-8e28-8bddfcffc50c/crio-8575db975cef01eb89fd28f03c6fa8fdca33842a2041979b632356575beb2d34: Error finding container 8575db975cef01eb89fd28f03c6fa8fdca33842a2041979b632356575beb2d34: Status 404 returned error can't find the container with id 8575db975cef01eb89fd28f03c6fa8fdca33842a2041979b632356575beb2d34
	Nov 09 14:01:47 functional-419649 kubelet[6431]: E1109 14:01:47.402433    6431 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762696907401518836  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203239}  inodes_used:{value:105}}"
	Nov 09 14:01:47 functional-419649 kubelet[6431]: E1109 14:01:47.402456    6431 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762696907401518836  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203239}  inodes_used:{value:105}}"
	Nov 09 14:01:55 functional-419649 kubelet[6431]: E1109 14:01:55.936648    6431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-nhqbg" podUID="0d93ae6c-8c15-4992-8d45-0638b27bc438"
	Nov 09 14:01:56 functional-419649 kubelet[6431]: E1109 14:01:56.966230    6431 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists"
	Nov 09 14:01:56 functional-419649 kubelet[6431]: E1109 14:01:56.966317    6431 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists" pod="kube-system/kube-proxy-tw9jj"
	Nov 09 14:01:56 functional-419649 kubelet[6431]: E1109 14:01:56.966336    6431 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\" already exists" pod="kube-system/kube-proxy-tw9jj"
	Nov 09 14:01:56 functional-419649 kubelet[6431]: E1109 14:01:56.966422    6431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-tw9jj_kube-system(64d037d4-fe85-43d4-8322-67e3cf4a7b89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-tw9jj_kube-system(64d037d4-fe85-43d4-8322-67e3cf4a7b89)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-proxy-tw9jj_kube-system_64d037d4-fe85-43d4-8322-67e3cf4a7b89_2\\\" already exists\"" pod="kube-system/kube-proxy-tw9jj" podUID="64d037d4-fe85-43d4-8322-67e3cf4a7b89"
	
	
	==> storage-provisioner [4ccfd82eb8e555e10a2eafc36a68aa554bade3da97f156fb8739c88f2a38cf0e] <==
	W1109 14:01:32.657294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:34.661740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:34.678281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:36.682357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:36.689299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:38.696520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:38.707710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:40.712445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:40.720371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:42.724367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:42.731445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:44.736085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:44.749241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:46.753414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:46.764412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:48.769991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:48.777189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:50.783600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:50.798486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:52.802982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:52.809964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:54.818106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:54.830999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:56.835656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:01:56.848533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6678989530f542c497fe013e487fb90f58d077213ceab56369a11083ac93d8aa] <==
	I1109 13:44:43.443213       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 13:44:43.494214       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 13:44:43.509099       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 13:44:43.550008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:44:47.009647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:44:51.271035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:44:54.874686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:44:57.929867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:45:00.953468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:45:00.962351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 13:45:00.962538       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 13:45:00.962713       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"edabc5ba-9ba5-4f59-828d-21dd30bf1c29", APIVersion:"v1", ResourceVersion:"533", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-419649_f071ba0b-f946-440f-ad3d-fe78e643a2d5 became leader
	I1109 13:45:00.962870       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-419649_f071ba0b-f946-440f-ad3d-fe78e643a2d5!
	W1109 13:45:00.967758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:45:00.981678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 13:45:01.063910       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-419649_f071ba0b-f946-440f-ad3d-fe78e643a2d5!
	W1109 13:45:02.986290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:45:02.993756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
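
The kubelet section above shows kube-proxy wedged on a stale CRI-O sandbox ("pod sandbox with name \"k8s_kube-proxy-tw9jj_...\" already exists"). A minimal recovery sketch, assuming SSH access to the node through minikube; the sandbox-ID placeholder is illustrative:

	# list the stale sandbox ID for the stuck kube-proxy pod
	minikube -p functional-419649 ssh -- sudo crictl pods --name kube-proxy-tw9jj -q
	# force-remove it so the kubelet can recreate the sandbox
	minikube -p functional-419649 ssh -- sudo crictl rmp -f <pod-sandbox-id>
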
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-419649 -n functional-419649
helpers_test.go:269: (dbg) Run:  kubectl --context functional-419649 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-vgzbw mysql-5bb876957f-2vzsj sp-pod dashboard-metrics-scraper-77bf4d6c4c-nhqbg kubernetes-dashboard-855c9754f9-9z7jc
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-419649 describe pod busybox-mount hello-node-75c85bcc94-vgzbw mysql-5bb876957f-2vzsj sp-pod dashboard-metrics-scraper-77bf4d6c4c-nhqbg kubernetes-dashboard-855c9754f9-9z7jc
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-419649 describe pod busybox-mount hello-node-75c85bcc94-vgzbw mysql-5bb876957f-2vzsj sp-pod dashboard-metrics-scraper-77bf4d6c4c-nhqbg kubernetes-dashboard-855c9754f9-9z7jc: exit status 1 (130.615181ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-419649/192.168.39.90
	Start Time:       Sun, 09 Nov 2025 13:51:16 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.14
	IPs:
	  IP:  10.244.0.14
	Containers:
	  mount-munger:
	    Container ID:  cri-o://a4bfeb1eaf4e15a5f6e37e7754b859374bfe1b44c20eebb4ebbf05b097e18c3c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 09 Nov 2025 13:51:49 +0000
	      Finished:     Sun, 09 Nov 2025 13:51:49 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6p5g6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-6p5g6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-419649
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.565s (31.594s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-vgzbw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-419649/192.168.39.90
	Start Time:       Sun, 09 Nov 2025 13:51:07 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4fzhz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4fzhz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  10m                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-vgzbw to functional-419649
	  Normal   Pulled     10m                kubelet            Successfully pulled image "kicbase/echo-server" in 919ms (919ms including waiting). Image size: 4945246 bytes.
	  Warning  Failed     3m32s              kubelet            Error: container create failed: time="2025-11-09T13:51:08Z" level=error msg="runc create failed: unable to start container process: error during container init: exec: \"/bin/echo-server\": stat /bin/echo-server: no such file or directory"
	  Warning  Failed     83s                kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     83s                kubelet            Error: ErrImagePull
	  Normal   BackOff    82s                kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     82s                kubelet            Error: ImagePullBackOff
	  Normal   Pulling    69s (x3 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-2vzsj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-419649/192.168.39.90
	Start Time:       Sun, 09 Nov 2025 13:51:54 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.17
	IPs:
	  IP:           10.244.0.17
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hfrjp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hfrjp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-2vzsj to functional-419649
	  Warning  Failed     5m38s                  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m8s (x2 over 8m24s)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m8s (x3 over 8m24s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    2m39s (x4 over 8m24s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m39s (x4 over 8m24s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m24s (x4 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-419649/192.168.39.90
	Start Time:       Sun, 09 Nov 2025 13:51:15 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:  10.244.0.13
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9sr2c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-9sr2c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/sp-pod to functional-419649
	  Warning  Failed     7m24s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m38s (x3 over 10m)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m38s (x4 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    81s (x10 over 10m)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     81s (x10 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    67s (x5 over 10m)    kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-nhqbg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-9z7jc" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-419649 describe pod busybox-mount hello-node-75c85bcc94-vgzbw mysql-5bb876957f-2vzsj sp-pod dashboard-metrics-scraper-77bf4d6c4c-nhqbg kubernetes-dashboard-855c9754f9-9z7jc: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (603.36s)
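
Every non-running pod in the post-mortem above is blocked on the same root cause: Docker Hub's unauthenticated pull rate limit ("toomanyrequests"). A minimal workaround sketch, assuming Docker Hub credentials are available; the secret name dockerhub-creds and the credential variables are hypothetical:

	# create a registry secret and attach it to the default service account
	kubectl --context functional-419649 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PASS"
	kubectl --context functional-419649 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'
	# alternatively, side-load the image so the node never contacts docker.io
	# (assumes docker.io/mysql:5.7 already exists in the host's image cache)
	minikube -p functional-419649 image load docker.io/mysql:5.7
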

TestFunctional/parallel/ServiceCmd/DeployApp (600.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-419649 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-419649 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-vgzbw" [3b85306c-2aa1-4f2a-9f4f-b08d7fe54720] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-419649 -n functional-419649
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-09 14:01:07.423901272 +0000 UTC m=+1938.146375657
functional_test.go:1460: (dbg) Run:  kubectl --context functional-419649 describe po hello-node-75c85bcc94-vgzbw -n default
functional_test.go:1460: (dbg) kubectl --context functional-419649 describe po hello-node-75c85bcc94-vgzbw -n default:
Name:             hello-node-75c85bcc94-vgzbw
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-419649/192.168.39.90
Start Time:       Sun, 09 Nov 2025 13:51:07 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4fzhz (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-4fzhz:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  10m                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-vgzbw to functional-419649
Normal   Pulled     9m59s              kubelet            Successfully pulled image "kicbase/echo-server" in 919ms (919ms including waiting). Image size: 4945246 bytes.
Warning  Failed     2m42s              kubelet            Error: container create failed: time="2025-11-09T13:51:08Z" level=error msg="runc create failed: unable to start container process: error during container init: exec: \"/bin/echo-server\": stat /bin/echo-server: no such file or directory"
Warning  Failed     33s                kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     33s                kubelet            Error: ErrImagePull
Normal   BackOff    32s                kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     32s                kubelet            Error: ImagePullBackOff
Normal   Pulling    19s (x3 over 10m)  kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-419649 logs hello-node-75c85bcc94-vgzbw -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-419649 logs hello-node-75c85bcc94-vgzbw -n default: exit status 1 (85.129381ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-vgzbw" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-419649 logs hello-node-75c85bcc94-vgzbw -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.70s)
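
The hello-node events show two stacked failures: an early pull of kicbase/echo-server succeeded but the resolved manifest had no /bin/echo-server binary, and every retry after that hit the Docker Hub rate limit. A sketch of pinning and pre-loading the image; the :1.0 tag is an assumption, not something this log confirms:

	# assumes kicbase/echo-server:1.0 already exists in the host's Docker cache
	minikube -p functional-419649 image load kicbase/echo-server:1.0
	# point the deployment at the pinned tag instead of the implicit :latest
	kubectl --context functional-419649 set image deployment/hello-node \
	  echo-server=kicbase/echo-server:1.0
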

TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419649 service --namespace=default --https --url hello-node: exit status 115 (295.300929ms)

-- stdout --
	https://192.168.39.90:31476
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-419649 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)
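
SVC_UNREACHABLE here is a cascade of the DeployApp failure above, and the Format and URL subtests below fail the same way: the NodePort was allocated, but the service has no ready backend. A quick verification sketch:

	# expect ENDPOINTS to be <none> while the pod sits in ImagePullBackOff
	kubectl --context functional-419649 get endpoints hello-node
	kubectl --context functional-419649 get pods -l app=hello-node \
	  -o jsonpath='{.items[*].status.containerStatuses[*].state}{"\n"}'
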

TestFunctional/parallel/ServiceCmd/Format (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419649 service hello-node --url --format={{.IP}}: exit status 115 (322.026314ms)

-- stdout --
	192.168.39.90
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-419649 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.32s)
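
The --format value appears to be rendered as a Go text/template over each discovered service URL, which is why stdout carries the bare node IP. A minimal illustration of that mechanism with a stand-in struct (only the fact that .IP resolves to the node IP comes from this log; the field set is an assumption):

    package main

    import (
        "os"
        "text/template"
    )

    // serviceURL is a hypothetical stand-in for the value the
    // --format template is executed against.
    type serviceURL struct {
        IP   string
        Port int
    }

    func main() {
        tmpl := template.Must(template.New("fmt").Parse("{{.IP}}\n"))
        // Prints "192.168.39.90", matching the captured stdout above.
        if err := tmpl.Execute(os.Stdout, serviceURL{IP: "192.168.39.90", Port: 31476}); err != nil {
            panic(err)
        }
    }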
TestFunctional/parallel/ServiceCmd/URL (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419649 service hello-node --url: exit status 115 (328.754259ms)
-- stdout --
	http://192.168.39.90:31476
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-419649 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.90:31476
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.33s)
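
The command still printed an endpoint before exiting non-zero, and functional_test.go:1575 logs it, so the NodePort mapping exists even though nothing runs behind it. A short probe of that endpoint as a follow-up sketch; while the backing pod is missing, a connection error or an error status is the expected outcome:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        // Endpoint copied verbatim from the test output above.
        resp, err := client.Get("http://192.168.39.90:31476")
        if err != nil {
            fmt.Println("endpoint unreachable:", err) // expected with no running pod
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }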
TestPreload (162.14s)
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-148043 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-148043 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m39.106281095s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-148043 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-148043 image pull gcr.io/k8s-minikube/busybox: (2.708975418s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-148043
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-148043: (7.197910023s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-148043 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-148043 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (49.87068779s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-148043 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf
-- /stdout --
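
TestPreload pulls gcr.io/k8s-minikube/busybox into the profile, stops the VM, restarts it, and expects the pulled image to survive the restart; the list above contains only the preloaded images. The post-mortem log below shows the restart downloading and re-extracting the v1.32.0 preload tarball, which is consistent with the pulled image being lost. A minimal sketch of the assertion in question, mirroring the shape of preload_test.go:70-75:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // List images in the profile and expect the previously pulled
        // busybox to still be present after the stop/start cycle.
        out, err := exec.Command("out/minikube-linux-amd64",
            "-p", "test-preload-148043", "image", "list").Output()
        if err != nil {
            fmt.Println("image list failed:", err)
            return
        }
        if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
            fmt.Printf("expected busybox in image list, got:\n%s", out)
        }
    }
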
panic.go:636: *** TestPreload FAILED at 2025-11-09 14:45:00.571490793 +0000 UTC m=+4571.293965197
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-148043 -n test-preload-148043
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-148043 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-148043 logs -n 25: (1.363698979s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-570915 ssh -n multinode-570915-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-570915     │ jenkins │ v1.37.0 │ 09 Nov 25 14:31 UTC │ 09 Nov 25 14:31 UTC │
	│ ssh     │ multinode-570915 ssh -n multinode-570915 sudo cat /home/docker/cp-test_multinode-570915-m03_multinode-570915.txt                                          │ multinode-570915     │ jenkins │ v1.37.0 │ 09 Nov 25 14:31 UTC │ 09 Nov 25 14:31 UTC │
	│ cp      │ multinode-570915 cp multinode-570915-m03:/home/docker/cp-test.txt multinode-570915-m02:/home/docker/cp-test_multinode-570915-m03_multinode-570915-m02.txt │ multinode-570915     │ jenkins │ v1.37.0 │ 09 Nov 25 14:31 UTC │ 09 Nov 25 14:31 UTC │
	│ ssh     │ multinode-570915 ssh -n multinode-570915-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-570915     │ jenkins │ v1.37.0 │ 09 Nov 25 14:31 UTC │ 09 Nov 25 14:31 UTC │
	│ ssh     │ multinode-570915 ssh -n multinode-570915-m02 sudo cat /home/docker/cp-test_multinode-570915-m03_multinode-570915-m02.txt                                  │ multinode-570915     │ jenkins │ v1.37.0 │ 09 Nov 25 14:31 UTC │ 09 Nov 25 14:31 UTC │
	│ node    │ multinode-570915 node stop m03                                                                                                                            │ multinode-570915     │ jenkins │ v1.37.0 │ 09 Nov 25 14:31 UTC │ 09 Nov 25 14:31 UTC │
	│ node    │ multinode-570915 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-570915     │ jenkins │ v1.37.0 │ 09 Nov 25 14:31 UTC │ 09 Nov 25 14:32 UTC │
	│ node    │ list -p multinode-570915                                                                                                                                  │ multinode-570915     │ jenkins │ v1.37.0 │ 09 Nov 25 14:32 UTC │                     │
	│ stop    │ -p multinode-570915                                                                                                                                       │ multinode-570915     │ jenkins │ v1.37.0 │ 09 Nov 25 14:32 UTC │ 09 Nov 25 14:35 UTC │
	│ start   │ -p multinode-570915 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-570915     │ jenkins │ v1.37.0 │ 09 Nov 25 14:35 UTC │ 09 Nov 25 14:37 UTC │
	│ node    │ list -p multinode-570915                                                                                                                                  │ multinode-570915     │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │                     │
	│ node    │ multinode-570915 node delete m03                                                                                                                          │ multinode-570915     │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ stop    │ multinode-570915 stop                                                                                                                                     │ multinode-570915     │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:39 UTC │
	│ start   │ -p multinode-570915 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-570915     │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:41 UTC │
	│ node    │ list -p multinode-570915                                                                                                                                  │ multinode-570915     │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │                     │
	│ start   │ -p multinode-570915-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-570915-m02 │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │                     │
	│ start   │ -p multinode-570915-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-570915-m03 │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:42 UTC │
	│ node    │ add -p multinode-570915                                                                                                                                   │ multinode-570915     │ jenkins │ v1.37.0 │ 09 Nov 25 14:42 UTC │                     │
	│ delete  │ -p multinode-570915-m03                                                                                                                                   │ multinode-570915-m03 │ jenkins │ v1.37.0 │ 09 Nov 25 14:42 UTC │ 09 Nov 25 14:42 UTC │
	│ delete  │ -p multinode-570915                                                                                                                                       │ multinode-570915     │ jenkins │ v1.37.0 │ 09 Nov 25 14:42 UTC │ 09 Nov 25 14:42 UTC │
	│ start   │ -p test-preload-148043 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-148043  │ jenkins │ v1.37.0 │ 09 Nov 25 14:42 UTC │ 09 Nov 25 14:44 UTC │
	│ image   │ test-preload-148043 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-148043  │ jenkins │ v1.37.0 │ 09 Nov 25 14:44 UTC │ 09 Nov 25 14:44 UTC │
	│ stop    │ -p test-preload-148043                                                                                                                                    │ test-preload-148043  │ jenkins │ v1.37.0 │ 09 Nov 25 14:44 UTC │ 09 Nov 25 14:44 UTC │
	│ start   │ -p test-preload-148043 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-148043  │ jenkins │ v1.37.0 │ 09 Nov 25 14:44 UTC │ 09 Nov 25 14:45 UTC │
	│ image   │ test-preload-148043 image list                                                                                                                            │ test-preload-148043  │ jenkins │ v1.37.0 │ 09 Nov 25 14:45 UTC │ 09 Nov 25 14:45 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:44:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:44:10.552070  581234 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:44:10.552380  581234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:44:10.552391  581234 out.go:374] Setting ErrFile to fd 2...
	I1109 14:44:10.552396  581234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:44:10.552625  581234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 14:44:10.553222  581234 out.go:368] Setting JSON to false
	I1109 14:44:10.554367  581234 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":73600,"bootTime":1762625851,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:44:10.554514  581234 start.go:143] virtualization: kvm guest
	I1109 14:44:10.557027  581234 out.go:179] * [test-preload-148043] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:44:10.559010  581234 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:44:10.559021  581234 notify.go:221] Checking for updates...
	I1109 14:44:10.562322  581234 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:44:10.564033  581234 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 14:44:10.565745  581234 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 14:44:10.567194  581234 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:44:10.568683  581234 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:44:10.571042  581234 config.go:182] Loaded profile config "test-preload-148043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1109 14:44:10.573301  581234 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1109 14:44:10.574663  581234 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:44:10.617422  581234 out.go:179] * Using the kvm2 driver based on existing profile
	I1109 14:44:10.618840  581234 start.go:309] selected driver: kvm2
	I1109 14:44:10.618870  581234 start.go:930] validating driver "kvm2" against &{Name:test-preload-148043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-148043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:44:10.619016  581234 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:44:10.620263  581234 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:44:10.620327  581234 cni.go:84] Creating CNI manager for ""
	I1109 14:44:10.620375  581234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 14:44:10.620420  581234 start.go:353] cluster config:
	{Name:test-preload-148043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-148043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:44:10.620532  581234 iso.go:125] acquiring lock: {Name:mk8f0547b4600c0b2d1e831f024251df19b55199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:44:10.622811  581234 out.go:179] * Starting "test-preload-148043" primary control-plane node in "test-preload-148043" cluster
	I1109 14:44:10.624251  581234 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1109 14:44:10.645717  581234 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1109 14:44:10.645788  581234 cache.go:65] Caching tarball of preloaded images
	I1109 14:44:10.646178  581234 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1109 14:44:10.648620  581234 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1109 14:44:10.650348  581234 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1109 14:44:10.679185  581234 preload.go:295] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1109 14:44:10.679250  581234 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1109 14:44:13.167756  581234 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
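
    // The preload above is fetched with its md5 appended to the URL
    // (download.go:108, checksum obtained from the GCS API). A minimal
    // sketch of a download that verifies an md5 while streaming; it
    // assumes nothing about minikube's downloader beyond the
    // checksum-in-URL convention visible in this log (no resume,
    // progress, or retries here).
    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func fetchWithMD5(url, path, wantHex string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := md5.New()
        // Hash the stream while writing it to disk.
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
            return fmt.Errorf("md5 mismatch: got %s, want %s", got, wantHex)
        }
        return nil
    }

    func main() {
        // URL and checksum copied from the log lines above.
        err := fetchWithMD5(
            "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4",
            "/tmp/preloaded.tar.lz4",
            "2acdb4dde52794f2167c79dcee7507ae")
        fmt.Println("download:", err)
    }
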
	I1109 14:44:13.167959  581234 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/test-preload-148043/config.json ...
	I1109 14:44:13.168206  581234 start.go:360] acquireMachinesLock for test-preload-148043: {Name:mkb2a0b4add9a99b18ce9ab72b74eb5b4fda0e0a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1109 14:44:13.168284  581234 start.go:364] duration metric: took 53.691µs to acquireMachinesLock for "test-preload-148043"
	I1109 14:44:13.168314  581234 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:44:13.168319  581234 fix.go:54] fixHost starting: 
	I1109 14:44:13.170506  581234 fix.go:112] recreateIfNeeded on test-preload-148043: state=Stopped err=<nil>
	W1109 14:44:13.170544  581234 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:44:13.172534  581234 out.go:252] * Restarting existing kvm2 VM for "test-preload-148043" ...
	I1109 14:44:13.172677  581234 main.go:143] libmachine: starting domain...
	I1109 14:44:13.172697  581234 main.go:143] libmachine: ensuring networks are active...
	I1109 14:44:13.173940  581234 main.go:143] libmachine: Ensuring network default is active
	I1109 14:44:13.174464  581234 main.go:143] libmachine: Ensuring network mk-test-preload-148043 is active
	I1109 14:44:13.175011  581234 main.go:143] libmachine: getting domain XML...
	I1109 14:44:13.176488  581234 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-148043</name>
	  <uuid>850ee72b-e0cc-466a-ba54-a1663bc970a1</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/test-preload-148043/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/test-preload-148043/test-preload-148043.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:85:af:21'/>
	      <source network='mk-test-preload-148043'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:39:ae:50'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1109 14:44:14.560875  581234 main.go:143] libmachine: waiting for domain to start...
	I1109 14:44:14.562541  581234 main.go:143] libmachine: domain is now running
	I1109 14:44:14.562573  581234 main.go:143] libmachine: waiting for IP...
	I1109 14:44:14.563466  581234 main.go:143] libmachine: domain test-preload-148043 has defined MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:14.564221  581234 main.go:143] libmachine: domain test-preload-148043 has current primary IP address 192.168.39.71 and MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:14.564239  581234 main.go:143] libmachine: found domain IP: 192.168.39.71
	I1109 14:44:14.564260  581234 main.go:143] libmachine: reserving static IP address...
	I1109 14:44:14.564788  581234 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-148043", mac: "52:54:00:85:af:21", ip: "192.168.39.71"} in network mk-test-preload-148043: {Iface:virbr1 ExpiryTime:2025-11-09 15:42:39 +0000 UTC Type:0 Mac:52:54:00:85:af:21 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-148043 Clientid:01:52:54:00:85:af:21}
	I1109 14:44:14.564848  581234 main.go:143] libmachine: skip adding static IP to network mk-test-preload-148043 - found existing host DHCP lease matching {name: "test-preload-148043", mac: "52:54:00:85:af:21", ip: "192.168.39.71"}
	I1109 14:44:14.564870  581234 main.go:143] libmachine: reserved static IP address 192.168.39.71 for domain test-preload-148043
	I1109 14:44:14.564878  581234 main.go:143] libmachine: waiting for SSH...
	I1109 14:44:14.564887  581234 main.go:143] libmachine: Getting to WaitForSSH function...
	I1109 14:44:14.568113  581234 main.go:143] libmachine: domain test-preload-148043 has defined MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:14.568745  581234 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:85:af:21", ip: ""} in network mk-test-preload-148043: {Iface:virbr1 ExpiryTime:2025-11-09 15:42:39 +0000 UTC Type:0 Mac:52:54:00:85:af:21 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-148043 Clientid:01:52:54:00:85:af:21}
	I1109 14:44:14.568778  581234 main.go:143] libmachine: domain test-preload-148043 has defined IP address 192.168.39.71 and MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:14.569020  581234 main.go:143] libmachine: Using SSH client type: native
	I1109 14:44:14.569320  581234 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1109 14:44:14.569336  581234 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1109 14:44:17.656088  581234 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.71:22: connect: no route to host
	I1109 14:44:23.736340  581234 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.71:22: connect: no route to host
	I1109 14:44:27.768786  581234 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.71:22: connect: connection refused
	I1109 14:44:30.878291  581234 main.go:143] libmachine: SSH cmd err, output: <nil>: 
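
    // The "Error dialing TCP ... no route to host" lines above are the
    // normal retry pattern while the restarted VM boots: libmachine keeps
    // dialing port 22 until sshd answers. A minimal sketch of such a wait
    // loop; the timeout and poll interval are assumptions, not values
    // taken from this log.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
            if err == nil {
                conn.Close() // port 22 is accepting connections
                return nil
            }
            fmt.Println("Error dialing TCP:", err)
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
    }

    func main() {
        fmt.Println(waitForSSH("192.168.39.71:22", 2*time.Minute))
    }
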
	I1109 14:44:30.882595  581234 main.go:143] libmachine: domain test-preload-148043 has defined MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:30.883152  581234 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:85:af:21", ip: ""} in network mk-test-preload-148043: {Iface:virbr1 ExpiryTime:2025-11-09 15:44:27 +0000 UTC Type:0 Mac:52:54:00:85:af:21 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-148043 Clientid:01:52:54:00:85:af:21}
	I1109 14:44:30.883180  581234 main.go:143] libmachine: domain test-preload-148043 has defined IP address 192.168.39.71 and MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:30.883471  581234 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/test-preload-148043/config.json ...
	I1109 14:44:30.883703  581234 machine.go:94] provisionDockerMachine start ...
	I1109 14:44:30.886500  581234 main.go:143] libmachine: domain test-preload-148043 has defined MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:30.886982  581234 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:85:af:21", ip: ""} in network mk-test-preload-148043: {Iface:virbr1 ExpiryTime:2025-11-09 15:44:27 +0000 UTC Type:0 Mac:52:54:00:85:af:21 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-148043 Clientid:01:52:54:00:85:af:21}
	I1109 14:44:30.887009  581234 main.go:143] libmachine: domain test-preload-148043 has defined IP address 192.168.39.71 and MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:30.887214  581234 main.go:143] libmachine: Using SSH client type: native
	I1109 14:44:30.887443  581234 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1109 14:44:30.887453  581234 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:44:30.998748  581234 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1109 14:44:30.998843  581234 buildroot.go:166] provisioning hostname "test-preload-148043"
	I1109 14:44:31.002555  581234 main.go:143] libmachine: domain test-preload-148043 has defined MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:31.003130  581234 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:85:af:21", ip: ""} in network mk-test-preload-148043: {Iface:virbr1 ExpiryTime:2025-11-09 15:44:27 +0000 UTC Type:0 Mac:52:54:00:85:af:21 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-148043 Clientid:01:52:54:00:85:af:21}
	I1109 14:44:31.003163  581234 main.go:143] libmachine: domain test-preload-148043 has defined IP address 192.168.39.71 and MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:31.003404  581234 main.go:143] libmachine: Using SSH client type: native
	I1109 14:44:31.003652  581234 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1109 14:44:31.003668  581234 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-148043 && echo "test-preload-148043" | sudo tee /etc/hostname
	I1109 14:44:31.136485  581234 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-148043
	
	I1109 14:44:31.139455  581234 main.go:143] libmachine: domain test-preload-148043 has defined MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:31.140080  581234 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:85:af:21", ip: ""} in network mk-test-preload-148043: {Iface:virbr1 ExpiryTime:2025-11-09 15:44:27 +0000 UTC Type:0 Mac:52:54:00:85:af:21 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-148043 Clientid:01:52:54:00:85:af:21}
	I1109 14:44:31.140118  581234 main.go:143] libmachine: domain test-preload-148043 has defined IP address 192.168.39.71 and MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:31.140341  581234 main.go:143] libmachine: Using SSH client type: native
	I1109 14:44:31.140570  581234 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1109 14:44:31.140597  581234 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-148043' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-148043/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-148043' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:44:31.267422  581234 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:44:31.267463  581234 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21139-549598/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-549598/.minikube}
	I1109 14:44:31.267528  581234 buildroot.go:174] setting up certificates
	I1109 14:44:31.267549  581234 provision.go:84] configureAuth start
	I1109 14:44:31.270849  581234 main.go:143] libmachine: domain test-preload-148043 has defined MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:31.271400  581234 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:85:af:21", ip: ""} in network mk-test-preload-148043: {Iface:virbr1 ExpiryTime:2025-11-09 15:44:27 +0000 UTC Type:0 Mac:52:54:00:85:af:21 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-148043 Clientid:01:52:54:00:85:af:21}
	I1109 14:44:31.271436  581234 main.go:143] libmachine: domain test-preload-148043 has defined IP address 192.168.39.71 and MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:31.275073  581234 main.go:143] libmachine: domain test-preload-148043 has defined MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:31.275633  581234 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:85:af:21", ip: ""} in network mk-test-preload-148043: {Iface:virbr1 ExpiryTime:2025-11-09 15:44:27 +0000 UTC Type:0 Mac:52:54:00:85:af:21 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-148043 Clientid:01:52:54:00:85:af:21}
	I1109 14:44:31.275665  581234 main.go:143] libmachine: domain test-preload-148043 has defined IP address 192.168.39.71 and MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:31.275862  581234 provision.go:143] copyHostCerts
	I1109 14:44:31.275953  581234 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-549598/.minikube/ca.pem, removing ...
	I1109 14:44:31.275978  581234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-549598/.minikube/ca.pem
	I1109 14:44:31.276074  581234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/ca.pem (1082 bytes)
	I1109 14:44:31.276214  581234 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-549598/.minikube/cert.pem, removing ...
	I1109 14:44:31.276226  581234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-549598/.minikube/cert.pem
	I1109 14:44:31.276283  581234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/cert.pem (1123 bytes)
	I1109 14:44:31.276442  581234 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-549598/.minikube/key.pem, removing ...
	I1109 14:44:31.276459  581234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-549598/.minikube/key.pem
	I1109 14:44:31.276504  581234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/key.pem (1679 bytes)
	I1109 14:44:31.276601  581234 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem org=jenkins.test-preload-148043 san=[127.0.0.1 192.168.39.71 localhost minikube test-preload-148043]
	I1109 14:44:31.741529  581234 provision.go:177] copyRemoteCerts
	I1109 14:44:31.741618  581234 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:44:31.744701  581234 main.go:143] libmachine: domain test-preload-148043 has defined MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:31.745186  581234 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:85:af:21", ip: ""} in network mk-test-preload-148043: {Iface:virbr1 ExpiryTime:2025-11-09 15:44:27 +0000 UTC Type:0 Mac:52:54:00:85:af:21 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-148043 Clientid:01:52:54:00:85:af:21}
	I1109 14:44:31.745224  581234 main.go:143] libmachine: domain test-preload-148043 has defined IP address 192.168.39.71 and MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:31.745386  581234 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/test-preload-148043/id_rsa Username:docker}
	I1109 14:44:31.834378  581234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:44:31.874845  581234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1109 14:44:31.915117  581234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:44:31.957638  581234 provision.go:87] duration metric: took 690.070133ms to configureAuth
	I1109 14:44:31.957746  581234 buildroot.go:189] setting minikube options for container-runtime
	I1109 14:44:31.957961  581234 config.go:182] Loaded profile config "test-preload-148043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1109 14:44:31.961954  581234 main.go:143] libmachine: domain test-preload-148043 has defined MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:31.962597  581234 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:85:af:21", ip: ""} in network mk-test-preload-148043: {Iface:virbr1 ExpiryTime:2025-11-09 15:44:27 +0000 UTC Type:0 Mac:52:54:00:85:af:21 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-148043 Clientid:01:52:54:00:85:af:21}
	I1109 14:44:31.962630  581234 main.go:143] libmachine: domain test-preload-148043 has defined IP address 192.168.39.71 and MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:31.962920  581234 main.go:143] libmachine: Using SSH client type: native
	I1109 14:44:31.963182  581234 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1109 14:44:31.963207  581234 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:44:32.256478  581234 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:44:32.256505  581234 machine.go:97] duration metric: took 1.37278808s to provisionDockerMachine
	I1109 14:44:32.256519  581234 start.go:293] postStartSetup for "test-preload-148043" (driver="kvm2")
	I1109 14:44:32.256531  581234 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:44:32.256610  581234 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:44:32.260352  581234 main.go:143] libmachine: domain test-preload-148043 has defined MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:32.261093  581234 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:85:af:21", ip: ""} in network mk-test-preload-148043: {Iface:virbr1 ExpiryTime:2025-11-09 15:44:27 +0000 UTC Type:0 Mac:52:54:00:85:af:21 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-148043 Clientid:01:52:54:00:85:af:21}
	I1109 14:44:32.261130  581234 main.go:143] libmachine: domain test-preload-148043 has defined IP address 192.168.39.71 and MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:32.261350  581234 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/test-preload-148043/id_rsa Username:docker}
	I1109 14:44:32.350691  581234 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:44:32.357142  581234 info.go:137] Remote host: Buildroot 2025.02
	I1109 14:44:32.357178  581234 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-549598/.minikube/addons for local assets ...
	I1109 14:44:32.357264  581234 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-549598/.minikube/files for local assets ...
	I1109 14:44:32.357357  581234 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem -> 5534732.pem in /etc/ssl/certs
	I1109 14:44:32.357463  581234 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:44:32.373376  581234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem --> /etc/ssl/certs/5534732.pem (1708 bytes)
	I1109 14:44:32.411670  581234 start.go:296] duration metric: took 155.134237ms for postStartSetup
	I1109 14:44:32.411725  581234 fix.go:56] duration metric: took 19.243404738s for fixHost
	I1109 14:44:32.415463  581234 main.go:143] libmachine: domain test-preload-148043 has defined MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:32.416044  581234 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:85:af:21", ip: ""} in network mk-test-preload-148043: {Iface:virbr1 ExpiryTime:2025-11-09 15:44:27 +0000 UTC Type:0 Mac:52:54:00:85:af:21 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-148043 Clientid:01:52:54:00:85:af:21}
	I1109 14:44:32.416082  581234 main.go:143] libmachine: domain test-preload-148043 has defined IP address 192.168.39.71 and MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:32.416395  581234 main.go:143] libmachine: Using SSH client type: native
	I1109 14:44:32.416669  581234 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1109 14:44:32.416685  581234 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1109 14:44:32.527118  581234 main.go:143] libmachine: SSH cmd err, output: <nil>: 1762699472.472497630
	
	I1109 14:44:32.527154  581234 fix.go:216] guest clock: 1762699472.472497630
	I1109 14:44:32.527167  581234 fix.go:229] Guest: 2025-11-09 14:44:32.47249763 +0000 UTC Remote: 2025-11-09 14:44:32.411731311 +0000 UTC m=+21.917556162 (delta=60.766319ms)
	I1109 14:44:32.527193  581234 fix.go:200] guest clock delta is within tolerance: 60.766319ms
	I1109 14:44:32.527209  581234 start.go:83] releasing machines lock for "test-preload-148043", held for 19.358901536s
	I1109 14:44:32.530403  581234 main.go:143] libmachine: domain test-preload-148043 has defined MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:32.531059  581234 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:85:af:21", ip: ""} in network mk-test-preload-148043: {Iface:virbr1 ExpiryTime:2025-11-09 15:44:27 +0000 UTC Type:0 Mac:52:54:00:85:af:21 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-148043 Clientid:01:52:54:00:85:af:21}
	I1109 14:44:32.531104  581234 main.go:143] libmachine: domain test-preload-148043 has defined IP address 192.168.39.71 and MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:32.532144  581234 ssh_runner.go:195] Run: cat /version.json
	I1109 14:44:32.532289  581234 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:44:32.535902  581234 main.go:143] libmachine: domain test-preload-148043 has defined MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:32.536255  581234 main.go:143] libmachine: domain test-preload-148043 has defined MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:32.536403  581234 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:85:af:21", ip: ""} in network mk-test-preload-148043: {Iface:virbr1 ExpiryTime:2025-11-09 15:44:27 +0000 UTC Type:0 Mac:52:54:00:85:af:21 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-148043 Clientid:01:52:54:00:85:af:21}
	I1109 14:44:32.536446  581234 main.go:143] libmachine: domain test-preload-148043 has defined IP address 192.168.39.71 and MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:32.536741  581234 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/test-preload-148043/id_rsa Username:docker}
	I1109 14:44:32.536749  581234 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:85:af:21", ip: ""} in network mk-test-preload-148043: {Iface:virbr1 ExpiryTime:2025-11-09 15:44:27 +0000 UTC Type:0 Mac:52:54:00:85:af:21 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-148043 Clientid:01:52:54:00:85:af:21}
	I1109 14:44:32.536843  581234 main.go:143] libmachine: domain test-preload-148043 has defined IP address 192.168.39.71 and MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:32.537054  581234 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/test-preload-148043/id_rsa Username:docker}
	I1109 14:44:32.623536  581234 ssh_runner.go:195] Run: systemctl --version
	I1109 14:44:32.665256  581234 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:44:32.822457  581234 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:44:32.831605  581234 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:44:32.831685  581234 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:44:32.857765  581234 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 14:44:32.857816  581234 start.go:496] detecting cgroup driver to use...
	I1109 14:44:32.857896  581234 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:44:32.884694  581234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:44:32.906065  581234 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:44:32.906128  581234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:44:32.927627  581234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:44:32.949661  581234 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:44:33.117383  581234 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:44:33.352434  581234 docker.go:234] disabling docker service ...
	I1109 14:44:33.352516  581234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:44:33.376438  581234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:44:33.397858  581234 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:44:33.599389  581234 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:44:33.771380  581234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:44:33.792082  581234 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:44:33.825663  581234 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1109 14:44:33.825892  581234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:44:33.842869  581234 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:44:33.842958  581234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:44:33.859815  581234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:44:33.876680  581234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:44:33.893879  581234 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:44:33.912178  581234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:44:33.929003  581234 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:44:33.954629  581234 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:44:33.970732  581234 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:44:33.984050  581234 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1109 14:44:33.984149  581234 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1109 14:44:34.009068  581234 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:44:34.023985  581234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:44:34.184478  581234 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:44:34.311848  581234 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:44:34.311983  581234 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:44:34.319538  581234 start.go:564] Will wait 60s for crictl version
	I1109 14:44:34.319625  581234 ssh_runner.go:195] Run: which crictl
	I1109 14:44:34.325927  581234 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1109 14:44:34.378600  581234 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
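
    // After restarting crio, minikube waits up to 60s for the runtime
    // socket and then for a crictl version response (the two "Will wait
    // 60s" lines above). A minimal sketch of the socket wait as a plain
    // stat-based poll; the real implementation may differ.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // socket file exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("socket %s did not appear within %s", path, timeout)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }
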
	I1109 14:44:34.378718  581234 ssh_runner.go:195] Run: crio --version
	I1109 14:44:34.415027  581234 ssh_runner.go:195] Run: crio --version
	I1109 14:44:34.456778  581234 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1109 14:44:34.461867  581234 main.go:143] libmachine: domain test-preload-148043 has defined MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:34.462487  581234 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:85:af:21", ip: ""} in network mk-test-preload-148043: {Iface:virbr1 ExpiryTime:2025-11-09 15:44:27 +0000 UTC Type:0 Mac:52:54:00:85:af:21 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-148043 Clientid:01:52:54:00:85:af:21}
	I1109 14:44:34.462521  581234 main.go:143] libmachine: domain test-preload-148043 has defined IP address 192.168.39.71 and MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:34.462788  581234 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1109 14:44:34.468553  581234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:44:34.488116  581234 kubeadm.go:884] updating cluster {Name:test-preload-148043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-148043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:44:34.488235  581234 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1109 14:44:34.488282  581234 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:44:34.533697  581234 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1109 14:44:34.533776  581234 ssh_runner.go:195] Run: which lz4
	I1109 14:44:34.538884  581234 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1109 14:44:34.544789  581234 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1109 14:44:34.544864  581234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1109 14:44:36.596131  581234 crio.go:462] duration metric: took 2.057323336s to copy over tarball
	I1109 14:44:36.596276  581234 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1109 14:44:38.653306  581234 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.056981089s)
	I1109 14:44:38.653349  581234 crio.go:469] duration metric: took 2.057172226s to extract the tarball
	I1109 14:44:38.653359  581234 ssh_runner.go:146] rm: /preloaded.tar.lz4
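Condensed, the preload path above is: query the runtime's image store, ship the cached tarball when the expected images are missing, unpack it over /var, and delete it. A sketch with the file names from the log (command shapes are generic equivalents, not minikube's exact internals):

    sudo crictl images --output json                     # preloaded images present? if not:
    sudo tar --xattrs --xattrs-include security.capability \
      -I lz4 -C /var -xf /preloaded.tar.lz4              # ~380 MB tarball copied over beforehand
    rm /preloaded.tar.lz4                                # reclaim the space once extracted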
	I1109 14:44:38.698168  581234 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:44:38.756587  581234 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:44:38.756628  581234 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:44:38.756643  581234 kubeadm.go:935] updating node { 192.168.39.71 8443 v1.32.0 crio true true} ...
	I1109 14:44:38.756752  581234 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-148043 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-148043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
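In the kubelet drop-in above, the empty ExecStart= line is deliberate: it is the standard systemd idiom for clearing the inherited ExecStart before assigning a new one, since a plain service may declare only a single ExecStart. The pattern in isolation (illustrative unit fragment):

    [Service]
    ExecStart=
    ExecStart=/path/to/new/binary --with --flags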
	I1109 14:44:38.756846  581234 ssh_runner.go:195] Run: crio config
	I1109 14:44:38.817553  581234 cni.go:84] Creating CNI manager for ""
	I1109 14:44:38.817589  581234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 14:44:38.817610  581234 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:44:38.817635  581234 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-148043 NodeName:test-preload-148043 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:44:38.817773  581234 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-148043"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.71"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:44:38.817869  581234 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1109 14:44:38.835363  581234 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:44:38.835455  581234 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:44:38.850997  581234 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1109 14:44:38.880001  581234 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:44:38.908298  581234 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
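The config staged above as kubeadm.yaml.new can be sanity-checked on its own; kubeadm has shipped a config validator since v1.26, so a quick check (assuming the node's bundled binary is used) would be:

    sudo /var/lib/minikube/binaries/v1.32.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new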
	I1109 14:44:38.939848  581234 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I1109 14:44:38.946617  581234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
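Both /etc/hosts rewrites follow the same filter-append-copy pattern: drop any stale line for the name, append the fresh mapping, and install the temp file with sudo cp, since a plain sudo with output redirection would perform the redirect as the unprivileged user. Generic form:

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
      printf '192.168.39.71\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts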
	I1109 14:44:38.969038  581234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:44:39.138078  581234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:44:39.178662  581234 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/test-preload-148043 for IP: 192.168.39.71
	I1109 14:44:39.178690  581234 certs.go:195] generating shared ca certs ...
	I1109 14:44:39.178708  581234 certs.go:227] acquiring lock for ca certs: {Name:mkc766226c1ec8ac0cc61519ae61374bb0aa3b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:44:39.178994  581234 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key
	I1109 14:44:39.179048  581234 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key
	I1109 14:44:39.179060  581234 certs.go:257] generating profile certs ...
	I1109 14:44:39.179150  581234 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/test-preload-148043/client.key
	I1109 14:44:39.179212  581234 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/test-preload-148043/apiserver.key.c7030339
	I1109 14:44:39.179249  581234 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/test-preload-148043/proxy-client.key
	I1109 14:44:39.179369  581234 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473.pem (1338 bytes)
	W1109 14:44:39.179417  581234 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473_empty.pem, impossibly tiny 0 bytes
	I1109 14:44:39.179433  581234 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 14:44:39.179479  581234 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:44:39.179522  581234 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:44:39.179555  581234 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem (1679 bytes)
	I1109 14:44:39.179607  581234 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem (1708 bytes)
	I1109 14:44:39.180230  581234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:44:39.231169  581234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:44:39.284482  581234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:44:39.324771  581234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:44:39.367033  581234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/test-preload-148043/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1109 14:44:39.405574  581234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/test-preload-148043/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:44:39.443930  581234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/test-preload-148043/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:44:39.482926  581234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/test-preload-148043/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:44:39.523983  581234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:44:39.565188  581234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473.pem --> /usr/share/ca-certificates/553473.pem (1338 bytes)
	I1109 14:44:39.602141  581234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem --> /usr/share/ca-certificates/5534732.pem (1708 bytes)
	I1109 14:44:39.644369  581234 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:44:39.671416  581234 ssh_runner.go:195] Run: openssl version
	I1109 14:44:39.679783  581234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5534732.pem && ln -fs /usr/share/ca-certificates/5534732.pem /etc/ssl/certs/5534732.pem"
	I1109 14:44:39.697494  581234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5534732.pem
	I1109 14:44:39.704415  581234 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:42 /usr/share/ca-certificates/5534732.pem
	I1109 14:44:39.704507  581234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5534732.pem
	I1109 14:44:39.714135  581234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5534732.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:44:39.731454  581234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:44:39.749607  581234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:44:39.756831  581234 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:44:39.756926  581234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:44:39.766351  581234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:44:39.784078  581234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/553473.pem && ln -fs /usr/share/ca-certificates/553473.pem /etc/ssl/certs/553473.pem"
	I1109 14:44:39.801785  581234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/553473.pem
	I1109 14:44:39.808976  581234 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:42 /usr/share/ca-certificates/553473.pem
	I1109 14:44:39.809071  581234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/553473.pem
	I1109 14:44:39.818238  581234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/553473.pem /etc/ssl/certs/51391683.0"
	I1109 14:44:39.836280  581234 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:44:39.843265  581234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:44:39.853132  581234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:44:39.862830  581234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:44:39.872313  581234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:44:39.882178  581234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:44:39.892287  581234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
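Each openssl run above uses -checkend 86400, which exits 0 only if the certificate remains valid for the next 86400 seconds (24 h); a non-zero exit is what forces regeneration. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h; regenerate"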
	I1109 14:44:39.903632  581234 kubeadm.go:401] StartCluster: {Name:test-preload-148043 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-148043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:44:39.903916  581234 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:44:39.904049  581234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:44:39.960712  581234 cri.go:89] found id: ""
	I1109 14:44:39.960815  581234 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:44:39.977253  581234 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:44:39.977279  581234 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:44:39.977346  581234 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:44:39.996240  581234 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:44:39.996917  581234 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-148043" does not appear in /home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 14:44:39.997107  581234 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-549598/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-148043" cluster setting kubeconfig missing "test-preload-148043" context setting]
	I1109 14:44:39.997375  581234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/kubeconfig: {Name:mka7e7e8d5d1d87facf220110c90862a74355591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:44:40.026511  581234 kapi.go:59] client config for test-preload-148043: &rest.Config{Host:"https://192.168.39.71:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-549598/.minikube/profiles/test-preload-148043/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-549598/.minikube/profiles/test-preload-148043/client.key", CAFile:"/home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 14:44:40.026998  581234 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1109 14:44:40.027021  581234 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1109 14:44:40.027027  581234 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1109 14:44:40.027031  581234 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1109 14:44:40.027035  581234 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1109 14:44:40.027405  581234 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:44:40.043451  581234 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.71
	I1109 14:44:40.043514  581234 kubeadm.go:1161] stopping kube-system containers ...
	I1109 14:44:40.043537  581234 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1109 14:44:40.043621  581234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:44:40.096360  581234 cri.go:89] found id: ""
	I1109 14:44:40.096443  581234 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1109 14:44:40.119308  581234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:44:40.134862  581234 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:44:40.134889  581234 kubeadm.go:158] found existing configuration files:
	
	I1109 14:44:40.134939  581234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 14:44:40.149670  581234 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:44:40.149752  581234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:44:40.169780  581234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 14:44:40.186945  581234 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:44:40.187017  581234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:44:40.203107  581234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 14:44:40.218848  581234 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:44:40.218929  581234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:44:40.235714  581234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 14:44:40.252388  581234 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:44:40.252467  581234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
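The four grep/rm pairs above reduce to a single loop: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint (here every file is absent, so each rm is a no-op):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done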
	I1109 14:44:40.269989  581234 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:44:40.287442  581234 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 14:44:40.365016  581234 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 14:44:41.330402  581234 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1109 14:44:41.622988  581234 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 14:44:41.729329  581234 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1109 14:44:41.844564  581234 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:44:41.844728  581234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:44:42.345864  581234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:44:42.845195  581234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:44:43.345507  581234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:44:43.385605  581234 api_server.go:72] duration metric: took 1.541052412s to wait for apiserver process to appear ...
	I1109 14:44:43.385649  581234 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:44:43.385700  581234 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1109 14:44:43.386323  581234 api_server.go:269] stopped: https://192.168.39.71:8443/healthz: Get "https://192.168.39.71:8443/healthz": dial tcp 192.168.39.71:8443: connect: connection refused
	I1109 14:44:43.886054  581234 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1109 14:44:46.043609  581234 api_server.go:279] https://192.168.39.71:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1109 14:44:46.043660  581234 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1109 14:44:46.043690  581234 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1109 14:44:46.133553  581234 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:44:46.133612  581234 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:44:46.386178  581234 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1109 14:44:46.391402  581234 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:44:46.391445  581234 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:44:46.885956  581234 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1109 14:44:46.893080  581234 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:44:46.893113  581234 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:44:47.385858  581234 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1109 14:44:47.401975  581234 api_server.go:279] https://192.168.39.71:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:44:47.402029  581234 api_server.go:103] status: https://192.168.39.71:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:44:47.885774  581234 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1109 14:44:47.897386  581234 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I1109 14:44:47.906651  581234 api_server.go:141] control plane version: v1.32.0
	I1109 14:44:47.906693  581234 api_server.go:131] duration metric: took 4.521034075s to wait for apiserver health ...
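The poll above walks through the apiserver's usual startup states: connection refused while nothing is listening yet, 403 because anonymous access to /healthz is denied until the RBAC bootstrap roles exist, 500 while individual poststarthooks are still failing, and finally 200 "ok". An equivalent external probe (curl -k because the cluster's self-signed CA is not in the host trust store):

    until curl -ks https://192.168.39.71:8443/healthz | grep -qx ok; do sleep 0.5; done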
	I1109 14:44:47.906708  581234 cni.go:84] Creating CNI manager for ""
	I1109 14:44:47.906717  581234 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 14:44:47.908833  581234 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1109 14:44:47.910264  581234 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1109 14:44:47.927927  581234 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1109 14:44:48.016290  581234 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:44:48.038117  581234 system_pods.go:59] 7 kube-system pods found
	I1109 14:44:48.038185  581234 system_pods.go:61] "coredns-668d6bf9bc-5hp9k" [638e753e-4413-439a-8fc0-b7961c1560ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:44:48.038197  581234 system_pods.go:61] "etcd-test-preload-148043" [00b73fc8-2f0a-4c5c-828f-1418f47cff42] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:44:48.038205  581234 system_pods.go:61] "kube-apiserver-test-preload-148043" [5d960734-e732-461a-b222-fcefff667f54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:44:48.038212  581234 system_pods.go:61] "kube-controller-manager-test-preload-148043" [545d158e-195a-412a-bea2-725464452838] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:44:48.038216  581234 system_pods.go:61] "kube-proxy-vj6jp" [9aa5a4d4-3411-435a-a313-86570e81ed0f] Running
	I1109 14:44:48.038222  581234 system_pods.go:61] "kube-scheduler-test-preload-148043" [63d90007-be44-4b80-aa18-c9c61670a3cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:44:48.038227  581234 system_pods.go:61] "storage-provisioner" [c0c311b1-9c1b-4b12-82a1-8d21d14c32b8] Running
	I1109 14:44:48.038233  581234 system_pods.go:74] duration metric: took 21.91553ms to wait for pod list to return data ...
	I1109 14:44:48.038241  581234 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:44:48.045903  581234 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1109 14:44:48.045941  581234 node_conditions.go:123] node cpu capacity is 2
	I1109 14:44:48.045957  581234 node_conditions.go:105] duration metric: took 7.710568ms to run NodePressure ...
	I1109 14:44:48.046017  581234 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 14:44:48.460333  581234 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1109 14:44:48.469722  581234 kubeadm.go:744] kubelet initialised
	I1109 14:44:48.469767  581234 kubeadm.go:745] duration metric: took 9.399551ms waiting for restarted kubelet to initialise ...
	I1109 14:44:48.469814  581234 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:44:48.498957  581234 ops.go:34] apiserver oom_adj: -16
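/proc/<pid>/oom_adj is the legacy kernel OOM interface with a range of -17 (never kill) to +15; the -16 read back above marks kube-apiserver as one of the last processes the OOM killer may pick. The probe itself:

    cat /proc/$(pgrep kube-apiserver)/oom_adj            # prints -16 for the apiserver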
	I1109 14:44:48.498991  581234 kubeadm.go:602] duration metric: took 8.521703644s to restartPrimaryControlPlane
	I1109 14:44:48.499005  581234 kubeadm.go:403] duration metric: took 8.595390552s to StartCluster
	I1109 14:44:48.499030  581234 settings.go:142] acquiring lock: {Name:mkb59fcf785d78efbba1217c69544ee37b77198f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:44:48.499120  581234 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 14:44:48.499838  581234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/kubeconfig: {Name:mka7e7e8d5d1d87facf220110c90862a74355591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:44:48.500144  581234 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:44:48.500235  581234 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:44:48.500357  581234 addons.go:70] Setting storage-provisioner=true in profile "test-preload-148043"
	I1109 14:44:48.500380  581234 addons.go:239] Setting addon storage-provisioner=true in "test-preload-148043"
	W1109 14:44:48.500389  581234 addons.go:248] addon storage-provisioner should already be in state true
	I1109 14:44:48.500390  581234 config.go:182] Loaded profile config "test-preload-148043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1109 14:44:48.500396  581234 addons.go:70] Setting default-storageclass=true in profile "test-preload-148043"
	I1109 14:44:48.500435  581234 host.go:66] Checking if "test-preload-148043" exists ...
	I1109 14:44:48.500435  581234 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-148043"
	I1109 14:44:48.502409  581234 out.go:179] * Verifying Kubernetes components...
	I1109 14:44:48.503447  581234 kapi.go:59] client config for test-preload-148043: &rest.Config{Host:"https://192.168.39.71:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-549598/.minikube/profiles/test-preload-148043/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-549598/.minikube/profiles/test-preload-148043/client.key", CAFile:"/home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 14:44:48.503783  581234 addons.go:239] Setting addon default-storageclass=true in "test-preload-148043"
	W1109 14:44:48.503830  581234 addons.go:248] addon default-storageclass should already be in state true
	I1109 14:44:48.503863  581234 host.go:66] Checking if "test-preload-148043" exists ...
	I1109 14:44:48.503956  581234 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:44:48.503995  581234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:44:48.505847  581234 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:44:48.505880  581234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:44:48.506039  581234 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:44:48.506066  581234 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:44:48.510615  581234 main.go:143] libmachine: domain test-preload-148043 has defined MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:48.511189  581234 main.go:143] libmachine: domain test-preload-148043 has defined MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:48.511238  581234 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:85:af:21", ip: ""} in network mk-test-preload-148043: {Iface:virbr1 ExpiryTime:2025-11-09 15:44:27 +0000 UTC Type:0 Mac:52:54:00:85:af:21 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-148043 Clientid:01:52:54:00:85:af:21}
	I1109 14:44:48.511273  581234 main.go:143] libmachine: domain test-preload-148043 has defined IP address 192.168.39.71 and MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:48.511517  581234 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/test-preload-148043/id_rsa Username:docker}
	I1109 14:44:48.512121  581234 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:85:af:21", ip: ""} in network mk-test-preload-148043: {Iface:virbr1 ExpiryTime:2025-11-09 15:44:27 +0000 UTC Type:0 Mac:52:54:00:85:af:21 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:test-preload-148043 Clientid:01:52:54:00:85:af:21}
	I1109 14:44:48.512166  581234 main.go:143] libmachine: domain test-preload-148043 has defined IP address 192.168.39.71 and MAC address 52:54:00:85:af:21 in network mk-test-preload-148043
	I1109 14:44:48.512470  581234 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/test-preload-148043/id_rsa Username:docker}
	I1109 14:44:48.745863  581234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:44:48.779446  581234 node_ready.go:35] waiting up to 6m0s for node "test-preload-148043" to be "Ready" ...
	I1109 14:44:48.783634  581234 node_ready.go:49] node "test-preload-148043" is "Ready"
	I1109 14:44:48.783694  581234 node_ready.go:38] duration metric: took 4.197648ms for node "test-preload-148043" to be "Ready" ...
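The readiness gate above has a kubectl equivalent; assuming the kubeconfig context carries the profile name, as minikube sets up by default, the same 6-minute wait is:

    kubectl --context test-preload-148043 wait --for=condition=Ready node/test-preload-148043 --timeout=6m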
	I1109 14:44:48.783717  581234 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:44:48.783817  581234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:44:48.807052  581234 api_server.go:72] duration metric: took 306.866054ms to wait for apiserver process to appear ...
	I1109 14:44:48.807092  581234 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:44:48.807123  581234 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1109 14:44:48.813503  581234 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I1109 14:44:48.815039  581234 api_server.go:141] control plane version: v1.32.0
	I1109 14:44:48.815071  581234 api_server.go:131] duration metric: took 7.969814ms to wait for apiserver health ...
	I1109 14:44:48.815083  581234 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:44:48.821229  581234 system_pods.go:59] 7 kube-system pods found
	I1109 14:44:48.821272  581234 system_pods.go:61] "coredns-668d6bf9bc-5hp9k" [638e753e-4413-439a-8fc0-b7961c1560ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:44:48.821281  581234 system_pods.go:61] "etcd-test-preload-148043" [00b73fc8-2f0a-4c5c-828f-1418f47cff42] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:44:48.821290  581234 system_pods.go:61] "kube-apiserver-test-preload-148043" [5d960734-e732-461a-b222-fcefff667f54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:44:48.821297  581234 system_pods.go:61] "kube-controller-manager-test-preload-148043" [545d158e-195a-412a-bea2-725464452838] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:44:48.821302  581234 system_pods.go:61] "kube-proxy-vj6jp" [9aa5a4d4-3411-435a-a313-86570e81ed0f] Running
	I1109 14:44:48.821307  581234 system_pods.go:61] "kube-scheduler-test-preload-148043" [63d90007-be44-4b80-aa18-c9c61670a3cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:44:48.821311  581234 system_pods.go:61] "storage-provisioner" [c0c311b1-9c1b-4b12-82a1-8d21d14c32b8] Running
	I1109 14:44:48.821318  581234 system_pods.go:74] duration metric: took 6.227834ms to wait for pod list to return data ...
	I1109 14:44:48.821327  581234 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:44:48.824948  581234 default_sa.go:45] found service account: "default"
	I1109 14:44:48.824993  581234 default_sa.go:55] duration metric: took 3.658232ms for default service account to be created ...
	I1109 14:44:48.825006  581234 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:44:48.828429  581234 system_pods.go:86] 7 kube-system pods found
	I1109 14:44:48.828470  581234 system_pods.go:89] "coredns-668d6bf9bc-5hp9k" [638e753e-4413-439a-8fc0-b7961c1560ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:44:48.828479  581234 system_pods.go:89] "etcd-test-preload-148043" [00b73fc8-2f0a-4c5c-828f-1418f47cff42] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:44:48.828490  581234 system_pods.go:89] "kube-apiserver-test-preload-148043" [5d960734-e732-461a-b222-fcefff667f54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:44:48.828500  581234 system_pods.go:89] "kube-controller-manager-test-preload-148043" [545d158e-195a-412a-bea2-725464452838] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:44:48.828506  581234 system_pods.go:89] "kube-proxy-vj6jp" [9aa5a4d4-3411-435a-a313-86570e81ed0f] Running
	I1109 14:44:48.828514  581234 system_pods.go:89] "kube-scheduler-test-preload-148043" [63d90007-be44-4b80-aa18-c9c61670a3cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:44:48.828521  581234 system_pods.go:89] "storage-provisioner" [c0c311b1-9c1b-4b12-82a1-8d21d14c32b8] Running
	I1109 14:44:48.828531  581234 system_pods.go:126] duration metric: took 3.517207ms to wait for k8s-apps to be running ...
	I1109 14:44:48.828547  581234 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:44:48.828595  581234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:44:48.849689  581234 system_svc.go:56] duration metric: took 21.125228ms WaitForService to wait for kubelet
	I1109 14:44:48.849741  581234 kubeadm.go:587] duration metric: took 349.563626ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:44:48.849773  581234 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:44:48.852936  581234 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1109 14:44:48.852980  581234 node_conditions.go:123] node cpu capacity is 2
	I1109 14:44:48.852996  581234 node_conditions.go:105] duration metric: took 3.216232ms to run NodePressure ...
	I1109 14:44:48.853014  581234 start.go:242] waiting for startup goroutines ...
	I1109 14:44:48.990031  581234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:44:48.993516  581234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:44:49.869145  581234 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1109 14:44:49.870761  581234 addons.go:515] duration metric: took 1.370542627s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1109 14:44:49.870863  581234 start.go:247] waiting for cluster config update ...
	I1109 14:44:49.870888  581234 start.go:256] writing updated cluster config ...
	I1109 14:44:49.871193  581234 ssh_runner.go:195] Run: rm -f paused
	I1109 14:44:49.878572  581234 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:44:49.879146  581234 kapi.go:59] client config for test-preload-148043: &rest.Config{Host:"https://192.168.39.71:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-549598/.minikube/profiles/test-preload-148043/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-549598/.minikube/profiles/test-preload-148043/client.key", CAFile:"/home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 14:44:49.883901  581234 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-5hp9k" in "kube-system" namespace to be "Ready" or be gone ...
	W1109 14:44:51.891330  581234 pod_ready.go:104] pod "coredns-668d6bf9bc-5hp9k" is not "Ready", error: <nil>
	W1109 14:44:54.391407  581234 pod_ready.go:104] pod "coredns-668d6bf9bc-5hp9k" is not "Ready", error: <nil>
	W1109 14:44:56.393093  581234 pod_ready.go:104] pod "coredns-668d6bf9bc-5hp9k" is not "Ready", error: <nil>
	W1109 14:44:58.394487  581234 pod_ready.go:104] pod "coredns-668d6bf9bc-5hp9k" is not "Ready", error: <nil>
	I1109 14:44:58.900558  581234 pod_ready.go:94] pod "coredns-668d6bf9bc-5hp9k" is "Ready"
	I1109 14:44:58.900597  581234 pod_ready.go:86] duration metric: took 9.016650928s for pod "coredns-668d6bf9bc-5hp9k" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:44:58.906104  581234 pod_ready.go:83] waiting for pod "etcd-test-preload-148043" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:44:58.923012  581234 pod_ready.go:94] pod "etcd-test-preload-148043" is "Ready"
	I1109 14:44:58.923051  581234 pod_ready.go:86] duration metric: took 16.912545ms for pod "etcd-test-preload-148043" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:44:59.006428  581234 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-148043" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:44:59.013884  581234 pod_ready.go:94] pod "kube-apiserver-test-preload-148043" is "Ready"
	I1109 14:44:59.013924  581234 pod_ready.go:86] duration metric: took 7.46419ms for pod "kube-apiserver-test-preload-148043" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:44:59.017289  581234 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-148043" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:44:59.087993  581234 pod_ready.go:94] pod "kube-controller-manager-test-preload-148043" is "Ready"
	I1109 14:44:59.088029  581234 pod_ready.go:86] duration metric: took 70.709172ms for pod "kube-controller-manager-test-preload-148043" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:44:59.288186  581234 pod_ready.go:83] waiting for pod "kube-proxy-vj6jp" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:44:59.689050  581234 pod_ready.go:94] pod "kube-proxy-vj6jp" is "Ready"
	I1109 14:44:59.689092  581234 pod_ready.go:86] duration metric: took 400.869963ms for pod "kube-proxy-vj6jp" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:44:59.887934  581234 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-148043" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:45:00.289738  581234 pod_ready.go:94] pod "kube-scheduler-test-preload-148043" is "Ready"
	I1109 14:45:00.289772  581234 pod_ready.go:86] duration metric: took 401.810633ms for pod "kube-scheduler-test-preload-148043" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:45:00.289784  581234 pod_ready.go:40] duration metric: took 10.411172535s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:45:00.338904  581234 start.go:628] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1109 14:45:00.340771  581234 out.go:203] 
	W1109 14:45:00.342045  581234 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1109 14:45:00.343135  581234 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1109 14:45:00.344312  581234 out.go:179] * Done! kubectl is now configured to use "test-preload-148043" cluster and "default" namespace by default
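
	The healthz wait above (api_server.go:253) boils down to polling https://192.168.39.71:8443/healthz until it answers 200 with body "ok". A minimal Go sketch of that loop follows; the helper name and the skipped TLS verification are illustrative assumptions, not minikube's actual api_server.go, which authenticates with the profile's client certificates shown in the kapi.go client config above.

// waitForHealthz polls an apiserver /healthz endpoint until it reports ok.
// Sketch only: the endpoint mirrors the log above, but the helper name and
// InsecureSkipVerify are assumptions made to keep the example self-contained.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// Matches the `returned 200: ok` pair of lines in the log above.
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz did not report ok within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.71:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}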
	
	
	==> CRI-O <==
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.270849248Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762699501270815592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53e14099-4fa7-49c1-b6e1-f7209a6edd05 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.272592604Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b33218d-1df6-4cce-bc8f-049b2eb1b052 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.272712272Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b33218d-1df6-4cce-bc8f-049b2eb1b052 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.272885786Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c359a8b3d3295d01472dd1e74f493148a00e2208dafc86cb494d544f9c38be3c,PodSandboxId:4ad577d09a1c555dddf5e738c0f598f202eb38fe89d9b6a78a54e8fe63f5a30b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1762699490883221774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5hp9k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638e753e-4413-439a-8fc0-b7961c1560ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bbbc3f3961b36cbf27715a4f0244dc7a468215345b9a70b5784687a95ffd2f,PodSandboxId:7f33761aee3612a1a66485bb6c23747fdf859e14602f1c184a346ee5de4716b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1762699487325834389,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj6jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa5a4d4-3411-435a-a313-86570e81ed0f,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7abe21170136dedb81c900b4e2832ef54c7def7773ce42ed7a0616763e9f0ea,PodSandboxId:5b7e34544ce122b277de97d26528682da27200e466d726363e6e6a76e5c57256,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762699487383882049,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c311b1-9c1b-4b12-82a1-8d21d14c32b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:950e8837a507864bafabc82f27e9ff6041bb5fcbc435da8ae826cbbd2b42709f,PodSandboxId:dc508001ee21d4a0720bd8ed95b349dbf56ed2034954afb0287832dd8f76cfbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1762699482905165685,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-148043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a92f743051ae5e7d8c0b206043b7984,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79623942d9aa1713935ea19b6ac2ff3a4b2b05d24ea0127faf4eb08cce4c2f4,PodSandboxId:3061e3a21b95d8a20665cdf0b0bda5369d6e4bc2e540972f70deab8ad4e372a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1762699482880255994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-148043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2332ce16ff2b4d344dfc9b075bce0031,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eedefadb8f7aa4c27d9d116532b803df9af0e443edf30115a73a503337114d1,PodSandboxId:1ce5a60938b725343cee6671214da60de42ba7c5f72e1c7ecbba072637230ac8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1762699482866809126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-148043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7fb4f92433382541d8897db5e4203e7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799a50efe347af9fe9353495afec3b8692227d5f2a031c64c7a8abad835aa648,PodSandboxId:8535247d5de611e817f54b2bf3ca73abf537a3b49aada0dd4e27cb3024da01de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1762699482852103519,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-148043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6264233773a63736e2a45eac9bfb73d9,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b33218d-1df6-4cce-bc8f-049b2eb1b052 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.323640837Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e16eba2b-83a7-43d7-8973-edd7e16b86bc name=/runtime.v1.RuntimeService/Version
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.323712365Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e16eba2b-83a7-43d7-8973-edd7e16b86bc name=/runtime.v1.RuntimeService/Version
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.325843062Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6978b44-ba33-4573-875c-dd9f36536e01 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.326379416Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762699501326354452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6978b44-ba33-4573-875c-dd9f36536e01 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.327484609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a9dfdbd-4b6e-41c6-914a-47796694c300 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.327700223Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a9dfdbd-4b6e-41c6-914a-47796694c300 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.328023148Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c359a8b3d3295d01472dd1e74f493148a00e2208dafc86cb494d544f9c38be3c,PodSandboxId:4ad577d09a1c555dddf5e738c0f598f202eb38fe89d9b6a78a54e8fe63f5a30b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1762699490883221774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5hp9k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638e753e-4413-439a-8fc0-b7961c1560ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bbbc3f3961b36cbf27715a4f0244dc7a468215345b9a70b5784687a95ffd2f,PodSandboxId:7f33761aee3612a1a66485bb6c23747fdf859e14602f1c184a346ee5de4716b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1762699487325834389,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj6jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa5a4d4-3411-435a-a313-86570e81ed0f,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7abe21170136dedb81c900b4e2832ef54c7def7773ce42ed7a0616763e9f0ea,PodSandboxId:5b7e34544ce122b277de97d26528682da27200e466d726363e6e6a76e5c57256,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762699487383882049,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c311b1-9c1b-4b12-82a1-8d21d14c32b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:950e8837a507864bafabc82f27e9ff6041bb5fcbc435da8ae826cbbd2b42709f,PodSandboxId:dc508001ee21d4a0720bd8ed95b349dbf56ed2034954afb0287832dd8f76cfbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1762699482905165685,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-148043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a92f743051ae5e7d8c0b206043b7984,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79623942d9aa1713935ea19b6ac2ff3a4b2b05d24ea0127faf4eb08cce4c2f4,PodSandboxId:3061e3a21b95d8a20665cdf0b0bda5369d6e4bc2e540972f70deab8ad4e372a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1762699482880255994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-148043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2332ce16ff2b4d344dfc9b075bce0031,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eedefadb8f7aa4c27d9d116532b803df9af0e443edf30115a73a503337114d1,PodSandboxId:1ce5a60938b725343cee6671214da60de42ba7c5f72e1c7ecbba072637230ac8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1762699482866809126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-148043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7fb4f92433382541d8897db5e4203e7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799a50efe347af9fe9353495afec3b8692227d5f2a031c64c7a8abad835aa648,PodSandboxId:8535247d5de611e817f54b2bf3ca73abf537a3b49aada0dd4e27cb3024da01de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1762699482852103519,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-148043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6264233773a63736e2a45eac9bfb73d9,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a9dfdbd-4b6e-41c6-914a-47796694c300 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.375465076Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b5467b67-975f-48a5-a29b-91616396d365 name=/runtime.v1.RuntimeService/Version
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.375558491Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b5467b67-975f-48a5-a29b-91616396d365 name=/runtime.v1.RuntimeService/Version
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.377512865Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=090af0bb-79c2-4446-aa7c-aa2389035b11 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.378020165Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762699501377992158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=090af0bb-79c2-4446-aa7c-aa2389035b11 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.379303359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd561c7b-3bee-4113-bc89-1f2e2f49c64d name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.379396548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd561c7b-3bee-4113-bc89-1f2e2f49c64d name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.379566120Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c359a8b3d3295d01472dd1e74f493148a00e2208dafc86cb494d544f9c38be3c,PodSandboxId:4ad577d09a1c555dddf5e738c0f598f202eb38fe89d9b6a78a54e8fe63f5a30b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1762699490883221774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5hp9k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638e753e-4413-439a-8fc0-b7961c1560ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bbbc3f3961b36cbf27715a4f0244dc7a468215345b9a70b5784687a95ffd2f,PodSandboxId:7f33761aee3612a1a66485bb6c23747fdf859e14602f1c184a346ee5de4716b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1762699487325834389,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj6jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa5a4d4-3411-435a-a313-86570e81ed0f,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7abe21170136dedb81c900b4e2832ef54c7def7773ce42ed7a0616763e9f0ea,PodSandboxId:5b7e34544ce122b277de97d26528682da27200e466d726363e6e6a76e5c57256,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762699487383882049,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c311b1-9c1b-4b12-82a1-8d21d14c32b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:950e8837a507864bafabc82f27e9ff6041bb5fcbc435da8ae826cbbd2b42709f,PodSandboxId:dc508001ee21d4a0720bd8ed95b349dbf56ed2034954afb0287832dd8f76cfbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1762699482905165685,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-148043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a92f743051ae5e7d8c0b206043b7984,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79623942d9aa1713935ea19b6ac2ff3a4b2b05d24ea0127faf4eb08cce4c2f4,PodSandboxId:3061e3a21b95d8a20665cdf0b0bda5369d6e4bc2e540972f70deab8ad4e372a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1762699482880255994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-148043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2332ce16ff2b4d344dfc9b075bce0031,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eedefadb8f7aa4c27d9d116532b803df9af0e443edf30115a73a503337114d1,PodSandboxId:1ce5a60938b725343cee6671214da60de42ba7c5f72e1c7ecbba072637230ac8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1762699482866809126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-148043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7fb4f92433382541d8897db5e4203e7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799a50efe347af9fe9353495afec3b8692227d5f2a031c64c7a8abad835aa648,PodSandboxId:8535247d5de611e817f54b2bf3ca73abf537a3b49aada0dd4e27cb3024da01de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1762699482852103519,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-148043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6264233773a63736e2a45eac9bfb73d9,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd561c7b-3bee-4113-bc89-1f2e2f49c64d name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.423053970Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=04e3c37c-63a6-4ccd-aa66-47adf34cc2fe name=/runtime.v1.RuntimeService/Version
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.423644309Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=04e3c37c-63a6-4ccd-aa66-47adf34cc2fe name=/runtime.v1.RuntimeService/Version
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.425510200Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=131a6c72-bd12-4093-abf7-64a46a1f7f13 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.426071443Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762699501426023991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=131a6c72-bd12-4093-abf7-64a46a1f7f13 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.427095645Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d854336d-59bb-4e5a-9574-892a4bb79642 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.427179478Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d854336d-59bb-4e5a-9574-892a4bb79642 name=/runtime.v1.RuntimeService/ListContainers
	Nov 09 14:45:01 test-preload-148043 crio[845]: time="2025-11-09 14:45:01.427401310Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c359a8b3d3295d01472dd1e74f493148a00e2208dafc86cb494d544f9c38be3c,PodSandboxId:4ad577d09a1c555dddf5e738c0f598f202eb38fe89d9b6a78a54e8fe63f5a30b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1762699490883221774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5hp9k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638e753e-4413-439a-8fc0-b7961c1560ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97bbbc3f3961b36cbf27715a4f0244dc7a468215345b9a70b5784687a95ffd2f,PodSandboxId:7f33761aee3612a1a66485bb6c23747fdf859e14602f1c184a346ee5de4716b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1762699487325834389,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj6jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa5a4d4-3411-435a-a313-86570e81ed0f,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7abe21170136dedb81c900b4e2832ef54c7def7773ce42ed7a0616763e9f0ea,PodSandboxId:5b7e34544ce122b277de97d26528682da27200e466d726363e6e6a76e5c57256,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762699487383882049,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c311b1-9c1b-4b12-82a1-8d21d14c32b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:950e8837a507864bafabc82f27e9ff6041bb5fcbc435da8ae826cbbd2b42709f,PodSandboxId:dc508001ee21d4a0720bd8ed95b349dbf56ed2034954afb0287832dd8f76cfbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1762699482905165685,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-148043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a92f743051ae5e7d8c0b206043b7984,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79623942d9aa1713935ea19b6ac2ff3a4b2b05d24ea0127faf4eb08cce4c2f4,PodSandboxId:3061e3a21b95d8a20665cdf0b0bda5369d6e4bc2e540972f70deab8ad4e372a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1762699482880255994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-148043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2332ce16ff2b4d344dfc9b075bce0031,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eedefadb8f7aa4c27d9d116532b803df9af0e443edf30115a73a503337114d1,PodSandboxId:1ce5a60938b725343cee6671214da60de42ba7c5f72e1c7ecbba072637230ac8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1762699482866809126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-148043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7fb4f92433382541d8897db5e4203e7,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799a50efe347af9fe9353495afec3b8692227d5f2a031c64c7a8abad835aa648,PodSandboxId:8535247d5de611e817f54b2bf3ca73abf537a3b49aada0dd4e27cb3024da01de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1762699482852103519,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-148043,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6264233773a63736e2a45eac9bfb73d9,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d854336d-59bb-4e5a-9574-892a4bb79642 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c359a8b3d3295       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   10 seconds ago      Running             coredns                   1                   4ad577d09a1c5       coredns-668d6bf9bc-5hp9k
	b7abe21170136       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   5b7e34544ce12       storage-provisioner
	97bbbc3f3961b       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   14 seconds ago      Running             kube-proxy                1                   7f33761aee361       kube-proxy-vj6jp
	950e8837a5078       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   18 seconds ago      Running             kube-scheduler            1                   dc508001ee21d       kube-scheduler-test-preload-148043
	d79623942d9aa       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   18 seconds ago      Running             kube-apiserver            1                   3061e3a21b95d       kube-apiserver-test-preload-148043
	1eedefadb8f7a       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   18 seconds ago      Running             etcd                      1                   1ce5a60938b72       etcd-test-preload-148043
	799a50efe347a       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   18 seconds ago      Running             kube-controller-manager   1                   8535247d5de61       kube-controller-manager-test-preload-148043
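
	The CRI-O debug entries above are the server side of kubelet's routine CRI polling (Version, ImageFsInfo, ListContainers every few hundred milliseconds), and this container table appears to be rendered from the same /runtime.v1.RuntimeService/ListContainers RPC (note ATTEMPT 1 on every row: each container was restarted once after the node reboot). A standalone client can issue the call directly against the crio.sock gRPC endpoint; the following is a sketch assuming the google.golang.org/grpc and k8s.io/cri-api modules, neither of which is part of this test suite.

// crilist issues the same /runtime.v1.RuntimeService/ListContainers RPC
// that appears in the CRI-O debug log above. Sketch only, assuming the
// google.golang.org/grpc and k8s.io/cri-api modules are available.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The socket path matches the kubeadm.alpha.kubernetes.io/cri-socket
	// annotation in the node description below.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	// An empty filter reproduces the "No filters were applied" responses above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.State, c.Metadata.Name)
	}
}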
	
	
	==> coredns [c359a8b3d3295d01472dd1e74f493148a00e2208dafc86cb494d544f9c38be3c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59125 - 34174 "HINFO IN 2766880864299512419.966779080809588915. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.025783996s
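
	The lone HINFO lookup above is most likely the CoreDNS loop plugin's startup self-probe: a random-label query that is expected to come back NXDOMAIN, as it does here. An equivalent query can be sent by hand; the sketch below uses the github.com/miekg/dns module, and the resolver address (10.96.0.10, the conventional kube-dns ClusterIP) is an assumption, since the Service IP does not appear in this log.

// hinfoprobe sends a loop-plugin-style HINFO query by hand. Sketch only:
// assumes github.com/miekg/dns, and 10.96.0.10 is an assumed ClusterIP.
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	// Random-looking probe name, as in the "[INFO] 127.0.0.1:... HINFO IN ..." line above.
	m.SetQuestion("2766880864299512419.966779080809588915.", dns.TypeHINFO)
	c := new(dns.Client)
	in, _, err := c.Exchange(m, "10.96.0.10:53")
	if err != nil {
		panic(err)
	}
	fmt.Println(dns.RcodeToString[in.Rcode]) // expect NXDOMAIN, matching the log
}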
	
	
	==> describe nodes <==
	Name:               test-preload-148043
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-148043
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=test-preload-148043
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_43_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:43:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-148043
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:44:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:44:48 +0000   Sun, 09 Nov 2025 14:43:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:44:48 +0000   Sun, 09 Nov 2025 14:43:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:44:48 +0000   Sun, 09 Nov 2025 14:43:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:44:48 +0000   Sun, 09 Nov 2025 14:44:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    test-preload-148043
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 850ee72be0cc466aba54a1663bc970a1
	  System UUID:                850ee72b-e0cc-466a-ba54-a1663bc970a1
	  Boot ID:                    cf577ec5-2fbd-4b56-9542-4dd35af236bd
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-5hp9k                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     98s
	  kube-system                 etcd-test-preload-148043                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         105s
	  kube-system                 kube-apiserver-test-preload-148043             250m (12%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-controller-manager-test-preload-148043    200m (10%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-vj6jp                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-scheduler-test-preload-148043             100m (5%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 96s                  kube-proxy       
	  Normal   Starting                 13s                  kube-proxy       
	  Normal   Starting                 111s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  111s (x8 over 111s)  kubelet          Node test-preload-148043 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    111s (x8 over 111s)  kubelet          Node test-preload-148043 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     111s (x7 over 111s)  kubelet          Node test-preload-148043 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  111s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    103s                 kubelet          Node test-preload-148043 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  103s                 kubelet          Node test-preload-148043 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     103s                 kubelet          Node test-preload-148043 status is now: NodeHasSufficientPID
	  Normal   Starting                 103s                 kubelet          Starting kubelet.
	  Normal   NodeReady                102s                 kubelet          Node test-preload-148043 status is now: NodeReady
	  Normal   RegisteredNode           99s                  node-controller  Node test-preload-148043 event: Registered Node test-preload-148043 in Controller
	  Normal   Starting                 20s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  20s (x8 over 20s)    kubelet          Node test-preload-148043 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20s (x8 over 20s)    kubelet          Node test-preload-148043 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20s (x7 over 20s)    kubelet          Node test-preload-148043 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  20s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 15s                  kubelet          Node test-preload-148043 has been rebooted, boot id: cf577ec5-2fbd-4b56-9542-4dd35af236bd
	  Normal   RegisteredNode           12s                  node-controller  Node test-preload-148043 event: Registered Node test-preload-148043 in Controller
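
A consistency check on the node description above: the 750m of CPU requests is the sum of the per-pod requests in the table (100m coredns + 100m etcd + 250m apiserver + 200m controller-manager + 100m scheduler), i.e. 37% of the 2 allocatable CPUs, and the three waves of kubelet events (111s, 103s, and 20s ago) plus the Rebooted warning match TestPreload's stop/start cycle; the boot id in the warning equals the one under System Info. The same view can be regenerated against a live profile with the context the harness already uses in this report (a sketch):

    kubectl --context test-preload-148043 describe node test-preload-148043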
	
	
	==> dmesg <==
	[Nov 9 14:44] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000060] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005079] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.953121] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.096540] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.114053] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.710333] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.039182] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [1eedefadb8f7aa4c27d9d116532b803df9af0e443edf30115a73a503337114d1] <==
	{"level":"info","ts":"2025-11-09T14:44:43.355190Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-09T14:44:43.364113Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-09T14:44:43.364136Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-09T14:44:43.364238Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-09T14:44:43.371573Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-09T14:44:43.375311Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"226d7ac4e2309206","initial-advertise-peer-urls":["https://192.168.39.71:2380"],"listen-peer-urls":["https://192.168.39.71:2380"],"advertise-client-urls":["https://192.168.39.71:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.71:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-09T14:44:43.375417Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-09T14:44:43.375521Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.71:2380"}
	{"level":"info","ts":"2025-11-09T14:44:43.375543Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.71:2380"}
	{"level":"info","ts":"2025-11-09T14:44:44.471805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-09T14:44:44.471871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-09T14:44:44.471968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 received MsgPreVoteResp from 226d7ac4e2309206 at term 2"}
	{"level":"info","ts":"2025-11-09T14:44:44.471984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 became candidate at term 3"}
	{"level":"info","ts":"2025-11-09T14:44:44.471995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 received MsgVoteResp from 226d7ac4e2309206 at term 3"}
	{"level":"info","ts":"2025-11-09T14:44:44.472003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"226d7ac4e2309206 became leader at term 3"}
	{"level":"info","ts":"2025-11-09T14:44:44.472010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 226d7ac4e2309206 elected leader 226d7ac4e2309206 at term 3"}
	{"level":"info","ts":"2025-11-09T14:44:44.473809Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"226d7ac4e2309206","local-member-attributes":"{Name:test-preload-148043 ClientURLs:[https://192.168.39.71:2379]}","request-path":"/0/members/226d7ac4e2309206/attributes","cluster-id":"98fbf1e9ed6d9a6e","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-09T14:44:44.473993Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-09T14:44:44.474088Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-09T14:44:44.474267Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-09T14:44:44.475981Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-09T14:44:44.475583Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-09T14:44:44.476654Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-09T14:44:44.477426Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-09T14:44:44.478575Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.71:2379"}
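
The raft transcript above is the normal single-voter election after a reboot: member 226d7ac4e2309206 pre-votes at its last persisted term (2), grants itself the vote, and becomes leader at term 3, after which client traffic is served on 2379. A sketch for probing member health from the guest, reusing the cert paths etcd logs at startup (assumes an etcdctl binary is reachable inside the VM; if it is not shipped, the same query can be issued inside the etcd container via crictl exec):

    out/minikube-linux-amd64 ssh -p test-preload-148043 -- sudo ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint status -w table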
	
	
	==> kernel <==
	 14:45:01 up 0 min,  0 users,  load average: 1.59, 0.46, 0.16
	Linux test-preload-148043 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Nov  1 20:49:51 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [d79623942d9aa1713935ea19b6ac2ff3a4b2b05d24ea0127faf4eb08cce4c2f4] <==
	I1109 14:44:46.085430       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1109 14:44:46.099322       1 aggregator.go:171] initial CRD sync complete...
	I1109 14:44:46.099398       1 autoregister_controller.go:144] Starting autoregister controller
	I1109 14:44:46.099418       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:44:46.099435       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:44:46.123806       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1109 14:44:46.133627       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:44:46.138360       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1109 14:44:46.138475       1 policy_source.go:240] refreshing policies
	I1109 14:44:46.141028       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1109 14:44:46.149781       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 14:44:46.149804       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 14:44:46.153237       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1109 14:44:46.153654       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 14:44:46.174661       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1109 14:44:46.188997       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:44:46.781676       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1109 14:44:46.947549       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:44:48.235333       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1109 14:44:48.311646       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1109 14:44:48.380371       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:44:48.391745       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:44:49.377748       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:44:49.485338       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1109 14:44:49.665743       1 controller.go:615] quota admission added evaluator for: endpoints
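
The run of "quota admission added evaluator" lines is informational: the ResourceQuota admission plugin registers an evaluator the first time each resource kind (leases, serviceaccounts, deployments, daemonsets, RBAC objects, endpointslices, replicasets, endpoints) is written after the restart, which is exactly the order in which the controllers recreate state. A sketch for confirming the restarted apiserver settled, using its aggregated readiness endpoint:

    kubectl --context test-preload-148043 get --raw '/readyz?verbose'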
	
	
	==> kube-controller-manager [799a50efe347af9fe9353495afec3b8692227d5f2a031c64c7a8abad835aa648] <==
	I1109 14:44:49.339954       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1109 14:44:49.344557       1 shared_informer.go:320] Caches are synced for GC
	I1109 14:44:49.346595       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1109 14:44:49.350340       1 shared_informer.go:320] Caches are synced for endpoint
	I1109 14:44:49.352326       1 shared_informer.go:320] Caches are synced for job
	I1109 14:44:49.358016       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1109 14:44:49.361352       1 shared_informer.go:320] Caches are synced for daemon sets
	I1109 14:44:49.365188       1 shared_informer.go:320] Caches are synced for crt configmap
	I1109 14:44:49.365387       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1109 14:44:49.365624       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1109 14:44:49.365668       1 shared_informer.go:320] Caches are synced for ephemeral
	I1109 14:44:49.365834       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1109 14:44:49.366385       1 shared_informer.go:320] Caches are synced for taint
	I1109 14:44:49.366506       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 14:44:49.366612       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-148043"
	I1109 14:44:49.366655       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1109 14:44:49.366695       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1109 14:44:49.366728       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1109 14:44:49.371888       1 shared_informer.go:320] Caches are synced for HPA
	I1109 14:44:49.396003       1 shared_informer.go:320] Caches are synced for garbage collector
	I1109 14:44:49.503068       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="179.462591ms"
	I1109 14:44:49.503224       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="103.715µs"
	I1109 14:44:51.964873       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="55.519µs"
	I1109 14:44:58.883800       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="26.987099ms"
	I1109 14:44:58.884022       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="122.82µs"
	
	
	==> kube-proxy [97bbbc3f3961b36cbf27715a4f0244dc7a468215345b9a70b5784687a95ffd2f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1109 14:44:47.745627       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1109 14:44:47.761709       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.71"]
	E1109 14:44:47.761811       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:44:47.821476       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1109 14:44:47.821647       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1109 14:44:47.821771       1 server_linux.go:170] "Using iptables Proxier"
	I1109 14:44:47.826139       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:44:47.826772       1 server.go:497] "Version info" version="v1.32.0"
	I1109 14:44:47.827104       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:44:47.830099       1 config.go:199] "Starting service config controller"
	I1109 14:44:47.830263       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1109 14:44:47.830400       1 config.go:105] "Starting endpoint slice config controller"
	I1109 14:44:47.830460       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1109 14:44:47.831477       1 config.go:329] "Starting node config controller"
	I1109 14:44:47.831550       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1109 14:44:47.932094       1 shared_informer.go:320] Caches are synced for node config
	I1109 14:44:47.932150       1 shared_informer.go:320] Caches are synced for service config
	I1109 14:44:47.932160       1 shared_informer.go:320] Caches are synced for endpoint slice config
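
The two "Error cleaning up nftables rules" entries at the top of this section are kube-proxy's best-effort startup removal of stale nft tables; the Buildroot guest kernel rejects the operation, kube-proxy also finds no IPv6 iptables support, and it falls back to the single-stack IPv4 iptables proxier, so everything after "Using iptables Proxier" is a healthy startup. A sketch to confirm the kernel really lacks nft table support (kube-proxy-check is a throwaway table name for this probe, not one kube-proxy creates):

    out/minikube-linux-amd64 ssh -p test-preload-148043 -- sudo nft add table ip kube-proxy-check
    # "Operation not supported" here matches the proxier errors above;
    # if it unexpectedly succeeds: sudo nft delete table ip kube-proxy-check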
	
	
	==> kube-scheduler [950e8837a507864bafabc82f27e9ff6041bb5fcbc435da8ae826cbbd2b42709f] <==
	I1109 14:44:44.337138       1 serving.go:386] Generated self-signed cert in-memory
	W1109 14:44:46.025279       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 14:44:46.025326       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 14:44:46.025336       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 14:44:46.025349       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 14:44:46.090343       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1109 14:44:46.090408       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:44:46.094565       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:44:46.094711       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1109 14:44:46.094732       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:44:46.096096       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1109 14:44:46.199004       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 09 14:44:46 test-preload-148043 kubelet[1178]: I1109 14:44:46.237126    1178 setters.go:602] "Node became not ready" node="test-preload-148043" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-09T14:44:46Z","lastTransitionTime":"2025-11-09T14:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Nov 09 14:44:46 test-preload-148043 kubelet[1178]: E1109 14:44:46.240100    1178 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-148043\" already exists" pod="kube-system/kube-controller-manager-test-preload-148043"
	Nov 09 14:44:46 test-preload-148043 kubelet[1178]: I1109 14:44:46.240126    1178 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-148043"
	Nov 09 14:44:46 test-preload-148043 kubelet[1178]: E1109 14:44:46.256291    1178 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-148043\" already exists" pod="kube-system/kube-scheduler-test-preload-148043"
	Nov 09 14:44:46 test-preload-148043 kubelet[1178]: I1109 14:44:46.256362    1178 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-148043"
	Nov 09 14:44:46 test-preload-148043 kubelet[1178]: E1109 14:44:46.281094    1178 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-148043\" already exists" pod="kube-system/etcd-test-preload-148043"
	Nov 09 14:44:46 test-preload-148043 kubelet[1178]: I1109 14:44:46.687627    1178 apiserver.go:52] "Watching apiserver"
	Nov 09 14:44:46 test-preload-148043 kubelet[1178]: E1109 14:44:46.693526    1178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-5hp9k" podUID="638e753e-4413-439a-8fc0-b7961c1560ca"
	Nov 09 14:44:46 test-preload-148043 kubelet[1178]: I1109 14:44:46.710893    1178 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Nov 09 14:44:46 test-preload-148043 kubelet[1178]: I1109 14:44:46.771051    1178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9aa5a4d4-3411-435a-a313-86570e81ed0f-xtables-lock\") pod \"kube-proxy-vj6jp\" (UID: \"9aa5a4d4-3411-435a-a313-86570e81ed0f\") " pod="kube-system/kube-proxy-vj6jp"
	Nov 09 14:44:46 test-preload-148043 kubelet[1178]: I1109 14:44:46.771111    1178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9aa5a4d4-3411-435a-a313-86570e81ed0f-lib-modules\") pod \"kube-proxy-vj6jp\" (UID: \"9aa5a4d4-3411-435a-a313-86570e81ed0f\") " pod="kube-system/kube-proxy-vj6jp"
	Nov 09 14:44:46 test-preload-148043 kubelet[1178]: I1109 14:44:46.771149    1178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c0c311b1-9c1b-4b12-82a1-8d21d14c32b8-tmp\") pod \"storage-provisioner\" (UID: \"c0c311b1-9c1b-4b12-82a1-8d21d14c32b8\") " pod="kube-system/storage-provisioner"
	Nov 09 14:44:46 test-preload-148043 kubelet[1178]: E1109 14:44:46.772292    1178 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 09 14:44:46 test-preload-148043 kubelet[1178]: E1109 14:44:46.772402    1178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/638e753e-4413-439a-8fc0-b7961c1560ca-config-volume podName:638e753e-4413-439a-8fc0-b7961c1560ca nodeName:}" failed. No retries permitted until 2025-11-09 14:44:47.272371996 +0000 UTC m=+5.712083948 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/638e753e-4413-439a-8fc0-b7961c1560ca-config-volume") pod "coredns-668d6bf9bc-5hp9k" (UID: "638e753e-4413-439a-8fc0-b7961c1560ca") : object "kube-system"/"coredns" not registered
	Nov 09 14:44:47 test-preload-148043 kubelet[1178]: E1109 14:44:47.275155    1178 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 09 14:44:47 test-preload-148043 kubelet[1178]: E1109 14:44:47.275242    1178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/638e753e-4413-439a-8fc0-b7961c1560ca-config-volume podName:638e753e-4413-439a-8fc0-b7961c1560ca nodeName:}" failed. No retries permitted until 2025-11-09 14:44:48.275227409 +0000 UTC m=+6.714939350 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/638e753e-4413-439a-8fc0-b7961c1560ca-config-volume") pod "coredns-668d6bf9bc-5hp9k" (UID: "638e753e-4413-439a-8fc0-b7961c1560ca") : object "kube-system"/"coredns" not registered
	Nov 09 14:44:48 test-preload-148043 kubelet[1178]: I1109 14:44:48.268258    1178 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Nov 09 14:44:48 test-preload-148043 kubelet[1178]: E1109 14:44:48.286485    1178 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 09 14:44:48 test-preload-148043 kubelet[1178]: E1109 14:44:48.286573    1178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/638e753e-4413-439a-8fc0-b7961c1560ca-config-volume podName:638e753e-4413-439a-8fc0-b7961c1560ca nodeName:}" failed. No retries permitted until 2025-11-09 14:44:50.286559089 +0000 UTC m=+8.726271042 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/638e753e-4413-439a-8fc0-b7961c1560ca-config-volume") pod "coredns-668d6bf9bc-5hp9k" (UID: "638e753e-4413-439a-8fc0-b7961c1560ca") : object "kube-system"/"coredns" not registered
	Nov 09 14:44:51 test-preload-148043 kubelet[1178]: E1109 14:44:51.804097    1178 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762699491803420449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 09 14:44:51 test-preload-148043 kubelet[1178]: E1109 14:44:51.804125    1178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762699491803420449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 09 14:44:52 test-preload-148043 kubelet[1178]: I1109 14:44:52.947183    1178 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 09 14:44:58 test-preload-148043 kubelet[1178]: I1109 14:44:58.833230    1178 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 09 14:45:01 test-preload-148043 kubelet[1178]: E1109 14:45:01.809492    1178 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762699501807120676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 09 14:45:01 test-preload-148043 kubelet[1178]: E1109 14:45:01.809574    1178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762699501807120676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
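
Two patterns above are restart noise rather than faults: the coredns config-volume mounts fail with object "kube-system"/"coredns" not registered until the kubelet's informer cache syncs, retrying with the doubling backoff visible in durationBeforeRetry (500ms, then 1s, then 2s) and stopping once the node "just became ready" at 14:44:48; the eviction-manager HasDedicatedImageFs errors recur at each stats interval while CRI-O's image-filesystem stats are still warming up. A sketch for tailing the retries on the guest, where the kubelet runs as a systemd unit:

    out/minikube-linux-amd64 ssh -p test-preload-148043 -- sudo journalctl -u kubelet -f | grep durationBeforeRetry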
	
	
	==> storage-provisioner [b7abe21170136dedb81c900b4e2832ef54c7def7773ce42ed7a0616763e9f0ea] <==
	I1109 14:44:47.567500       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-148043 -n test-preload-148043
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-148043 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-148043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-148043
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-148043: (1.043020755s)
--- FAIL: TestPreload (162.14s)
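
To chase this failure outside CI, the suite can be invoked straight from the minikube source tree; a minimal sketch (assumes out/minikube-linux-amd64 is already built, and omits the extra harness flags this job passes, such as the kvm2 driver and crio runtime selection seen in the minikube start invocations throughout this report):

    go test ./test/integration -run 'TestPreload$' -v -timeout 30m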

TestPause/serial/SecondStartNoReconfiguration (56.49s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-750355 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-750355 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.150201691s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-750355] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-750355" primary control-plane node in "pause-750355" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-750355" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I1109 14:53:21.959269  589326 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:53:21.959494  589326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:53:21.959503  589326 out.go:374] Setting ErrFile to fd 2...
	I1109 14:53:21.959509  589326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:53:21.959928  589326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 14:53:21.960843  589326 out.go:368] Setting JSON to false
	I1109 14:53:21.962382  589326 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":74151,"bootTime":1762625851,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:53:21.962594  589326 start.go:143] virtualization: kvm guest
	I1109 14:53:21.964958  589326 out.go:179] * [pause-750355] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:53:21.966606  589326 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:53:21.966678  589326 notify.go:221] Checking for updates...
	I1109 14:53:21.970053  589326 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:53:21.971625  589326 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 14:53:21.973020  589326 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 14:53:21.974588  589326 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:53:21.976185  589326 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:53:21.978431  589326 config.go:182] Loaded profile config "pause-750355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:53:21.979487  589326 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:53:22.040863  589326 out.go:179] * Using the kvm2 driver based on existing profile
	I1109 14:53:22.042399  589326 start.go:309] selected driver: kvm2
	I1109 14:53:22.042436  589326 start.go:930] validating driver "kvm2" against &{Name:pause-750355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-750355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.177 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:53:22.042691  589326 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:53:22.044101  589326 cni.go:84] Creating CNI manager for ""
	I1109 14:53:22.044182  589326 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 14:53:22.044262  589326 start.go:353] cluster config:
	{Name:pause-750355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-750355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.177 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:53:22.044465  589326 iso.go:125] acquiring lock: {Name:mk8f0547b4600c0b2d1e831f024251df19b55199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:53:22.046470  589326 out.go:179] * Starting "pause-750355" primary control-plane node in "pause-750355" cluster
	I1109 14:53:22.047943  589326 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:53:22.048019  589326 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 14:53:22.048035  589326 cache.go:65] Caching tarball of preloaded images
	I1109 14:53:22.048202  589326 preload.go:238] Found /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 14:53:22.048224  589326 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:53:22.048491  589326 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/config.json ...
	I1109 14:53:22.048933  589326 start.go:360] acquireMachinesLock for pause-750355: {Name:mkb2a0b4add9a99b18ce9ab72b74eb5b4fda0e0a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1109 14:53:25.510586  589326 start.go:364] duration metric: took 3.461568813s to acquireMachinesLock for "pause-750355"
	I1109 14:53:25.510673  589326 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:53:25.510684  589326 fix.go:54] fixHost starting: 
	I1109 14:53:25.514447  589326 fix.go:112] recreateIfNeeded on pause-750355: state=Running err=<nil>
	W1109 14:53:25.514520  589326 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:53:25.516273  589326 out.go:252] * Updating the running kvm2 "pause-750355" VM ...
	I1109 14:53:25.516316  589326 machine.go:94] provisionDockerMachine start ...
	I1109 14:53:25.522146  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:25.522747  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:25.522818  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:25.523082  589326 main.go:143] libmachine: Using SSH client type: native
	I1109 14:53:25.523387  589326 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1109 14:53:25.523411  589326 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:53:25.668866  589326 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-750355
	
	I1109 14:53:25.668919  589326 buildroot.go:166] provisioning hostname "pause-750355"
	I1109 14:53:25.672889  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:25.673847  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:25.673911  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:25.674379  589326 main.go:143] libmachine: Using SSH client type: native
	I1109 14:53:25.674697  589326 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1109 14:53:25.674713  589326 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-750355 && echo "pause-750355" | sudo tee /etc/hostname
	I1109 14:53:25.839854  589326 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-750355
	
	I1109 14:53:25.844155  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:25.844944  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:25.844984  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:25.845262  589326 main.go:143] libmachine: Using SSH client type: native
	I1109 14:53:25.845587  589326 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1109 14:53:25.845616  589326 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-750355' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-750355/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-750355' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:53:25.991025  589326 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:53:25.991068  589326 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21139-549598/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-549598/.minikube}
	I1109 14:53:25.991119  589326 buildroot.go:174] setting up certificates
	I1109 14:53:25.991135  589326 provision.go:84] configureAuth start
	I1109 14:53:25.996413  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:25.997098  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:25.997137  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:26.001574  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:26.002214  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:26.002252  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:26.002543  589326 provision.go:143] copyHostCerts
	I1109 14:53:26.002645  589326 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-549598/.minikube/ca.pem, removing ...
	I1109 14:53:26.002674  589326 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-549598/.minikube/ca.pem
	I1109 14:53:26.002765  589326 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/ca.pem (1082 bytes)
	I1109 14:53:26.002987  589326 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-549598/.minikube/cert.pem, removing ...
	I1109 14:53:26.003010  589326 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-549598/.minikube/cert.pem
	I1109 14:53:26.003063  589326 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/cert.pem (1123 bytes)
	I1109 14:53:26.003169  589326 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-549598/.minikube/key.pem, removing ...
	I1109 14:53:26.003186  589326 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-549598/.minikube/key.pem
	I1109 14:53:26.003234  589326 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-549598/.minikube/key.pem (1679 bytes)
	I1109 14:53:26.003320  589326 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem org=jenkins.pause-750355 san=[127.0.0.1 192.168.61.177 localhost minikube pause-750355]
	I1109 14:53:26.403496  589326 provision.go:177] copyRemoteCerts
	I1109 14:53:26.403573  589326 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:53:26.407611  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:26.408162  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:26.408194  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:26.408384  589326 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/pause-750355/id_rsa Username:docker}
	I1109 14:53:26.517423  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:53:26.577006  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1109 14:53:26.650560  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:53:26.718429  589326 provision.go:87] duration metric: took 727.270164ms to configureAuth
	I1109 14:53:26.718473  589326 buildroot.go:189] setting minikube options for container-runtime
	I1109 14:53:26.718872  589326 config.go:182] Loaded profile config "pause-750355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:53:26.723066  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:26.723583  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:26.723616  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:26.724219  589326 main.go:143] libmachine: Using SSH client type: native
	I1109 14:53:26.724515  589326 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1109 14:53:26.724537  589326 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:53:32.514159  589326 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:53:32.514195  589326 machine.go:97] duration metric: took 6.997864748s to provisionDockerMachine
	I1109 14:53:32.514211  589326 start.go:293] postStartSetup for "pause-750355" (driver="kvm2")
	I1109 14:53:32.514241  589326 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:53:32.514343  589326 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:53:32.518330  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.519023  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:32.519069  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.519325  589326 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/pause-750355/id_rsa Username:docker}
	I1109 14:53:32.617883  589326 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:53:32.624741  589326 info.go:137] Remote host: Buildroot 2025.02
	I1109 14:53:32.624826  589326 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-549598/.minikube/addons for local assets ...
	I1109 14:53:32.624922  589326 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-549598/.minikube/files for local assets ...
	I1109 14:53:32.625068  589326 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem -> 5534732.pem in /etc/ssl/certs
	I1109 14:53:32.625275  589326 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:53:32.646938  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem --> /etc/ssl/certs/5534732.pem (1708 bytes)
	I1109 14:53:32.702728  589326 start.go:296] duration metric: took 188.497538ms for postStartSetup
	I1109 14:53:32.702787  589326 fix.go:56] duration metric: took 7.192104702s for fixHost
	I1109 14:53:32.707025  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.707632  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:32.707664  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.707938  589326 main.go:143] libmachine: Using SSH client type: native
	I1109 14:53:32.708236  589326 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1109 14:53:32.708255  589326 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1109 14:53:32.848258  589326 main.go:143] libmachine: SSH cmd err, output: <nil>: 1762700012.843158872
	
	I1109 14:53:32.848293  589326 fix.go:216] guest clock: 1762700012.843158872
	I1109 14:53:32.848302  589326 fix.go:229] Guest: 2025-11-09 14:53:32.843158872 +0000 UTC Remote: 2025-11-09 14:53:32.702805276 +0000 UTC m=+10.819470767 (delta=140.353596ms)
	I1109 14:53:32.848332  589326 fix.go:200] guest clock delta is within tolerance: 140.353596ms
	I1109 14:53:32.848341  589326 start.go:83] releasing machines lock for "pause-750355", held for 7.33770666s
	I1109 14:53:32.852953  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.853612  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:32.853652  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.854578  589326 ssh_runner.go:195] Run: cat /version.json
	I1109 14:53:32.854645  589326 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:53:32.858821  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.859048  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.859461  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:32.859491  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.859702  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:32.859764  589326 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/pause-750355/id_rsa Username:docker}
	I1109 14:53:32.859784  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.860173  589326 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/pause-750355/id_rsa Username:docker}
	I1109 14:53:32.982848  589326 ssh_runner.go:195] Run: systemctl --version
	I1109 14:53:32.992834  589326 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:53:33.167243  589326 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:53:33.184281  589326 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:53:33.184428  589326 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:53:33.199770  589326 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:53:33.199835  589326 start.go:496] detecting cgroup driver to use...
	I1109 14:53:33.199924  589326 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:53:33.228861  589326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:53:33.253162  589326 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:53:33.253247  589326 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:53:33.277765  589326 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:53:33.306681  589326 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:53:33.547679  589326 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:53:33.789112  589326 docker.go:234] disabling docker service ...
	I1109 14:53:33.789192  589326 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:53:33.835061  589326 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:53:33.859423  589326 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:53:34.095668  589326 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:53:34.348950  589326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:53:34.370306  589326 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:53:34.406034  589326 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:53:34.406113  589326 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.424583  589326 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:53:34.424702  589326 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.444978  589326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.503318  589326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.536038  589326 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:53:34.557024  589326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.579042  589326 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.611977  589326 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.643050  589326 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:53:34.662589  589326 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:53:34.679267  589326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:53:35.016525  589326 ssh_runner.go:195] Run: sudo systemctl restart crio
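
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image and switching cgroup_manager to cgroupfs, before cri-o is restarted. A sketch of the same key replacement in Go over an in-memory copy of the drop-in; setTOMLKey is a hypothetical helper for illustration, not minikube code:

package main

import (
	"fmt"
	"regexp"
)

// setTOMLKey replaces any existing `key = ...` line with the desired
// quoted value, mirroring the sed -i 's|^.*key = .*$|...|' calls above.
func setTOMLKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, key+` = "`+value+`"`)
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}
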
	I1109 14:53:35.469636  589326 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:53:35.469725  589326 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:53:35.484191  589326 start.go:564] Will wait 60s for crictl version
	I1109 14:53:35.484304  589326 ssh_runner.go:195] Run: which crictl
	I1109 14:53:35.498865  589326 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1109 14:53:35.624105  589326 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1109 14:53:35.624234  589326 ssh_runner.go:195] Run: crio --version
	I1109 14:53:35.723482  589326 ssh_runner.go:195] Run: crio --version
	I1109 14:53:35.811809  589326 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1109 14:53:35.818067  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:35.818967  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:35.819010  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:35.819301  589326 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1109 14:53:35.831482  589326 kubeadm.go:884] updating cluster {Name:pause-750355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-750355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.177 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:53:35.831723  589326 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:53:35.831834  589326 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:53:35.984320  589326 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:53:35.984356  589326 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:53:35.984428  589326 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:53:36.087631  589326 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:53:36.087665  589326 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:53:36.087676  589326 kubeadm.go:935] updating node { 192.168.61.177 8443 v1.34.1 crio true true} ...
	I1109 14:53:36.087855  589326 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-750355 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-750355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:53:36.087968  589326 ssh_runner.go:195] Run: crio config
	I1109 14:53:36.214692  589326 cni.go:84] Creating CNI manager for ""
	I1109 14:53:36.214727  589326 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 14:53:36.214752  589326 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:53:36.214790  589326 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.177 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-750355 NodeName:pause-750355 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:53:36.215030  589326 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.177
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-750355"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.177"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.177"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:53:36.215131  589326 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:53:36.252652  589326 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:53:36.252755  589326 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:53:36.278942  589326 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1109 14:53:36.330921  589326 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:53:36.362721  589326 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
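
The kubeadm config generated above has just been written to /var/tmp/minikube/kubeadm.yaml.new. As a sketch of how one fragment of that document can be sanity-checked, the snippet below unmarshals the KubeletConfiguration fields this log exercises; the struct is a hand-rolled stand-in for the real k8s.io/kubelet/config/v1beta1 type, and gopkg.in/yaml.v3 is an assumed dependency:

package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// kubeletFragment models only the KubeletConfiguration fields present
// in the generated config above; it is illustrative, not the real type.
type kubeletFragment struct {
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	StaticPodPath            string `yaml:"staticPodPath"`
}

func main() {
	doc := `
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
staticPodPath: /etc/kubernetes/manifests
`
	var kc kubeletFragment
	if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", kc)
}
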
	I1109 14:53:36.399186  589326 ssh_runner.go:195] Run: grep 192.168.61.177	control-plane.minikube.internal$ /etc/hosts
	I1109 14:53:36.407571  589326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:53:36.697284  589326 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:53:36.743739  589326 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355 for IP: 192.168.61.177
	I1109 14:53:36.743768  589326 certs.go:195] generating shared ca certs ...
	I1109 14:53:36.743788  589326 certs.go:227] acquiring lock for ca certs: {Name:mkc766226c1ec8ac0cc61519ae61374bb0aa3b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:53:36.744005  589326 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key
	I1109 14:53:36.744085  589326 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key
	I1109 14:53:36.744113  589326 certs.go:257] generating profile certs ...
	I1109 14:53:36.744239  589326 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/client.key
	I1109 14:53:36.744328  589326 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/apiserver.key.0e71cea4
	I1109 14:53:36.744407  589326 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/proxy-client.key
	I1109 14:53:36.744547  589326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473.pem (1338 bytes)
	W1109 14:53:36.744588  589326 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473_empty.pem, impossibly tiny 0 bytes
	I1109 14:53:36.744605  589326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 14:53:36.744638  589326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:53:36.744667  589326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:53:36.744701  589326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem (1679 bytes)
	I1109 14:53:36.744757  589326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem (1708 bytes)
	I1109 14:53:36.745717  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:53:36.802898  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:53:36.930012  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:53:37.038562  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:53:37.132415  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1109 14:53:37.230643  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:53:37.344299  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:53:37.428591  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:53:37.508289  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem --> /usr/share/ca-certificates/5534732.pem (1708 bytes)
	I1109 14:53:37.601954  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:53:37.699927  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473.pem --> /usr/share/ca-certificates/553473.pem (1338 bytes)
	I1109 14:53:37.790527  589326 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:53:37.849189  589326 ssh_runner.go:195] Run: openssl version
	I1109 14:53:37.863966  589326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:53:37.899063  589326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:53:37.909975  589326 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:53:37.910059  589326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:53:37.925755  589326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:53:37.951822  589326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/553473.pem && ln -fs /usr/share/ca-certificates/553473.pem /etc/ssl/certs/553473.pem"
	I1109 14:53:37.986396  589326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/553473.pem
	I1109 14:53:38.002761  589326 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:42 /usr/share/ca-certificates/553473.pem
	I1109 14:53:38.002885  589326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/553473.pem
	I1109 14:53:38.018873  589326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/553473.pem /etc/ssl/certs/51391683.0"
	I1109 14:53:38.053748  589326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5534732.pem && ln -fs /usr/share/ca-certificates/5534732.pem /etc/ssl/certs/5534732.pem"
	I1109 14:53:38.080199  589326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5534732.pem
	I1109 14:53:38.096454  589326 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:42 /usr/share/ca-certificates/5534732.pem
	I1109 14:53:38.096542  589326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5534732.pem
	I1109 14:53:38.113575  589326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5534732.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:53:38.139320  589326 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:53:38.148439  589326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:53:38.162546  589326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:53:38.180182  589326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:53:38.194655  589326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:53:38.209689  589326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:53:38.225878  589326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
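
Each of the openssl x509 -noout ... -checkend 86400 runs above exits non-zero when the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides that the existing control-plane certs can be reused. An equivalent check against the Go standard library, as an illustrative sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, mirroring openssl's -checkend behavior from the log.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log; adjust for a local run.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
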
	I1109 14:53:38.242224  589326 kubeadm.go:401] StartCluster: {Name:pause-750355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-750355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.177 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:53:38.242380  589326 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:53:38.242485  589326 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:53:38.336888  589326 cri.go:89] found id: "45947c6dce347ae279e4132072fb89c1b0522e1b326178274424222c2588321f"
	I1109 14:53:38.336936  589326 cri.go:89] found id: "75289d4097d96308d69b2aaecdc6ec6132f28173770c61a0750ebca515cf1c7e"
	I1109 14:53:38.336944  589326 cri.go:89] found id: "14fc4c3df613902789c27b68b1a5733c47ba7f7489099ab5c477b0483663c4aa"
	I1109 14:53:38.336950  589326 cri.go:89] found id: "59e2c2e4d7754e6a89e73e34e1fff37173c66fb5e41dec40edd897afc30be428"
	I1109 14:53:38.336953  589326 cri.go:89] found id: "21775c560e54e26138ce07b6c06d3e22037a109f355e39c9929ed04ace19914a"
	I1109 14:53:38.336959  589326 cri.go:89] found id: "604c10edad5bca905fe997db6a580b99ebde28984c8a549a484170114ee3ddba"
	I1109 14:53:38.336965  589326 cri.go:89] found id: "78781e9a162ec886cd6c744eaa944c53245caabf93ca6dceadcffcbe3c2ebd45"
	I1109 14:53:38.336970  589326 cri.go:89] found id: "76fa8220ee04417468321c6b207132c59ea2361bc897036d67d13cd60c74934d"
	I1109 14:53:38.336976  589326 cri.go:89] found id: "58fe997c2cbccb9c9742a4120bf476b3f9d5a51772918e6de87c43fbeb1fb8fa"
	I1109 14:53:38.336988  589326 cri.go:89] found id: ""
	I1109 14:53:38.337060  589326 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-750355 -n pause-750355
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-750355 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-750355 logs -n 25: (2.006088509s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ stopped-upgrade-667086 stop                                                                                                                                                                                             │ stopped-upgrade-667086    │ jenkins │ v1.32.0 │ 09 Nov 25 14:50 UTC │ 09 Nov 25 14:50 UTC │
	│ start   │ -p stopped-upgrade-667086 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                  │ stopped-upgrade-667086    │ jenkins │ v1.37.0 │ 09 Nov 25 14:50 UTC │ 09 Nov 25 14:51 UTC │
	│ stop    │ -p NoKubernetes-748314                                                                                                                                                                                                  │ NoKubernetes-748314       │ jenkins │ v1.37.0 │ 09 Nov 25 14:50 UTC │ 09 Nov 25 14:50 UTC │
	│ start   │ -p NoKubernetes-748314 --driver=kvm2  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-748314       │ jenkins │ v1.37.0 │ 09 Nov 25 14:50 UTC │ 09 Nov 25 14:51 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-353436 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ running-upgrade-353436    │ jenkins │ v1.37.0 │ 09 Nov 25 14:50 UTC │                     │
	│ delete  │ -p running-upgrade-353436                                                                                                                                                                                               │ running-upgrade-353436    │ jenkins │ v1.37.0 │ 09 Nov 25 14:50 UTC │ 09 Nov 25 14:50 UTC │
	│ start   │ -p cert-expiration-729640 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                    │ cert-expiration-729640    │ jenkins │ v1.37.0 │ 09 Nov 25 14:50 UTC │ 09 Nov 25 14:52 UTC │
	│ start   │ -p kubernetes-upgrade-699004 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                                                                         │ kubernetes-upgrade-699004 │ jenkins │ v1.37.0 │ 09 Nov 25 14:51 UTC │                     │
	│ start   │ -p kubernetes-upgrade-699004 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                  │ kubernetes-upgrade-699004 │ jenkins │ v1.37.0 │ 09 Nov 25 14:51 UTC │ 09 Nov 25 14:52 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-667086 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ stopped-upgrade-667086    │ jenkins │ v1.37.0 │ 09 Nov 25 14:51 UTC │                     │
	│ delete  │ -p stopped-upgrade-667086                                                                                                                                                                                               │ stopped-upgrade-667086    │ jenkins │ v1.37.0 │ 09 Nov 25 14:51 UTC │ 09 Nov 25 14:51 UTC │
	│ ssh     │ -p NoKubernetes-748314 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-748314       │ jenkins │ v1.37.0 │ 09 Nov 25 14:51 UTC │                     │
	│ delete  │ -p NoKubernetes-748314                                                                                                                                                                                                  │ NoKubernetes-748314       │ jenkins │ v1.37.0 │ 09 Nov 25 14:51 UTC │ 09 Nov 25 14:51 UTC │
	│ start   │ -p pause-750355 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-750355              │ jenkins │ v1.37.0 │ 09 Nov 25 14:51 UTC │ 09 Nov 25 14:53 UTC │
	│ start   │ -p force-systemd-flag-936534 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                               │ force-systemd-flag-936534 │ jenkins │ v1.37.0 │ 09 Nov 25 14:51 UTC │ 09 Nov 25 14:53 UTC │
	│ delete  │ -p kubernetes-upgrade-699004                                                                                                                                                                                            │ kubernetes-upgrade-699004 │ jenkins │ v1.37.0 │ 09 Nov 25 14:52 UTC │ 09 Nov 25 14:52 UTC │
	│ start   │ -p cert-options-868897 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-868897       │ jenkins │ v1.37.0 │ 09 Nov 25 14:52 UTC │ 09 Nov 25 14:53 UTC │
	│ ssh     │ force-systemd-flag-936534 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                    │ force-systemd-flag-936534 │ jenkins │ v1.37.0 │ 09 Nov 25 14:53 UTC │ 09 Nov 25 14:53 UTC │
	│ delete  │ -p force-systemd-flag-936534                                                                                                                                                                                            │ force-systemd-flag-936534 │ jenkins │ v1.37.0 │ 09 Nov 25 14:53 UTC │ 09 Nov 25 14:53 UTC │
	│ start   │ -p auto-877855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                                                                                   │ auto-877855               │ jenkins │ v1.37.0 │ 09 Nov 25 14:53 UTC │                     │
	│ start   │ -p pause-750355 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-750355              │ jenkins │ v1.37.0 │ 09 Nov 25 14:53 UTC │ 09 Nov 25 14:54 UTC │
	│ ssh     │ cert-options-868897 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-868897       │ jenkins │ v1.37.0 │ 09 Nov 25 14:53 UTC │ 09 Nov 25 14:53 UTC │
	│ ssh     │ -p cert-options-868897 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-868897       │ jenkins │ v1.37.0 │ 09 Nov 25 14:53 UTC │ 09 Nov 25 14:53 UTC │
	│ delete  │ -p cert-options-868897                                                                                                                                                                                                  │ cert-options-868897       │ jenkins │ v1.37.0 │ 09 Nov 25 14:53 UTC │ 09 Nov 25 14:53 UTC │
	│ start   │ -p kindnet-877855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                                                                                  │ kindnet-877855            │ jenkins │ v1.37.0 │ 09 Nov 25 14:53 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:53:30
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:53:30.734172  589495 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:53:30.734586  589495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:53:30.734607  589495 out.go:374] Setting ErrFile to fd 2...
	I1109 14:53:30.734615  589495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:53:30.735083  589495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 14:53:30.736151  589495 out.go:368] Setting JSON to false
	I1109 14:53:30.737540  589495 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":74160,"bootTime":1762625851,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:53:30.737734  589495 start.go:143] virtualization: kvm guest
	I1109 14:53:30.740086  589495 out.go:179] * [kindnet-877855] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:53:30.741759  589495 notify.go:221] Checking for updates...
	I1109 14:53:30.741777  589495 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:53:30.744697  589495 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:53:30.746457  589495 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 14:53:30.747999  589495 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 14:53:30.749444  589495 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:53:30.751001  589495 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:53:30.752958  589495 config.go:182] Loaded profile config "auto-877855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:53:30.753111  589495 config.go:182] Loaded profile config "cert-expiration-729640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:53:30.753217  589495 config.go:182] Loaded profile config "guest-746433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1109 14:53:30.753410  589495 config.go:182] Loaded profile config "pause-750355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:53:30.753551  589495 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:53:30.799585  589495 out.go:179] * Using the kvm2 driver based on user configuration
	I1109 14:53:30.801521  589495 start.go:309] selected driver: kvm2
	I1109 14:53:30.801558  589495 start.go:930] validating driver "kvm2" against <nil>
	I1109 14:53:30.801574  589495 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:53:30.802709  589495 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 14:53:30.803061  589495 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:53:30.803102  589495 cni.go:84] Creating CNI manager for "kindnet"
	I1109 14:53:30.803108  589495 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 14:53:30.803158  589495 start.go:353] cluster config:
	{Name:kindnet-877855 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-877855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:53:30.803283  589495 iso.go:125] acquiring lock: {Name:mk8f0547b4600c0b2d1e831f024251df19b55199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:53:30.806238  589495 out.go:179] * Starting "kindnet-877855" primary control-plane node in "kindnet-877855" cluster
	I1109 14:53:30.807754  589495 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:53:30.807875  589495 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 14:53:30.807896  589495 cache.go:65] Caching tarball of preloaded images
	I1109 14:53:30.808055  589495 preload.go:238] Found /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 14:53:30.808072  589495 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:53:30.808194  589495 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/config.json ...
	I1109 14:53:30.808217  589495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/config.json: {Name:mke9fcd22a404f8037183a69c2c1c8c63d826560 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:53:30.808393  589495 start.go:360] acquireMachinesLock for kindnet-877855: {Name:mkb2a0b4add9a99b18ce9ab72b74eb5b4fda0e0a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1109 14:53:32.848495  589495 start.go:364] duration metric: took 2.040051873s to acquireMachinesLock for "kindnet-877855"
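
The machines lock released by the pause-750355 start just above is acquired here after a 2.04s wait: creation of KVM machines is serialized across the concurrent test profiles. A sketch of poll-based acquisition consistent with the Delay:500ms and Timeout:13m0s fields printed for this lock; the tryLock callback is a stand-in, since minikube's real lock is a process-level lock not shown in this log:

package main

import (
	"errors"
	"fmt"
	"time"
)

// acquire polls tryLock every delay until timeout elapses, matching the
// Delay/Timeout fields printed for the machines lock above.
func acquire(tryLock func() bool, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for !tryLock() {
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring machines lock")
		}
		time.Sleep(delay)
	}
	return nil
}

func main() {
	releaseAt := time.Now().Add(2 * time.Second) // simulate the holder releasing after ~2s
	err := acquire(func() bool { return time.Now().After(releaseAt) }, 500*time.Millisecond, 13*time.Minute)
	fmt.Println("acquired:", err == nil)
}
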
	I1109 14:53:32.848634  589495 start.go:93] Provisioning new machine with config: &{Name:kindnet-877855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-877855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:53:32.848849  589495 start.go:125] createHost starting for "" (driver="kvm2")
	I1109 14:53:30.266813  589106 crio.go:462] duration metric: took 2.267401499s to copy over tarball
	I1109 14:53:30.266985  589106 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1109 14:53:32.335590  589106 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.068540919s)
	I1109 14:53:32.335618  589106 crio.go:469] duration metric: took 2.068753812s to extract the tarball
	I1109 14:53:32.335627  589106 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1109 14:53:32.381849  589106 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:53:32.438314  589106 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:53:32.438345  589106 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:53:32.438363  589106 kubeadm.go:935] updating node { 192.168.50.12 8443 v1.34.1 crio true true} ...
	I1109 14:53:32.438502  589106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-877855 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-877855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:53:32.438598  589106 ssh_runner.go:195] Run: crio config
	I1109 14:53:32.504630  589106 cni.go:84] Creating CNI manager for ""
	I1109 14:53:32.504679  589106 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 14:53:32.504712  589106 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:53:32.504752  589106 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.12 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-877855 NodeName:auto-877855 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:53:32.505030  589106 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-877855"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.12"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.12"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:53:32.505118  589106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:53:32.522628  589106 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:53:32.522728  589106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:53:32.539299  589106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1109 14:53:32.567459  589106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:53:32.594058  589106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1109 14:53:32.628113  589106 ssh_runner.go:195] Run: grep 192.168.50.12	control-plane.minikube.internal$ /etc/hosts
	I1109 14:53:32.636674  589106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
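
The bash one-liner above updates /etc/hosts idempotently: it filters out any existing line ending in a tab plus control-plane.minikube.internal, appends the fresh mapping, and copies the result back over /etc/hosts. The same upsert expressed in Go, as an illustrative sketch:

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any existing line for the given hostname and appends
// a fresh "ip<TAB>hostname" entry, mirroring the bash one-liner above.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.50.2\tcontrol-plane.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "192.168.50.12", "control-plane.minikube.internal"))
}
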
	I1109 14:53:32.660195  589106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:53:32.878314  589106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:53:32.923282  589106 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855 for IP: 192.168.50.12
	I1109 14:53:32.923314  589106 certs.go:195] generating shared ca certs ...
	I1109 14:53:32.923332  589106 certs.go:227] acquiring lock for ca certs: {Name:mkc766226c1ec8ac0cc61519ae61374bb0aa3b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:53:32.923505  589106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key
	I1109 14:53:32.923564  589106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key
	I1109 14:53:32.923578  589106 certs.go:257] generating profile certs ...
	I1109 14:53:32.923639  589106 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.key
	I1109 14:53:32.923654  589106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt with IP's: []
	I1109 14:53:33.282206  589106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt ...
	I1109 14:53:33.282244  589106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt: {Name:mke4eebdd3814f81479beef090b4209b5daba63e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:53:33.282522  589106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.key ...
	I1109 14:53:33.282548  589106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.key: {Name:mkabf1e0059fcdf293a9fa843cad66ef44313960 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:53:33.282701  589106 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.key.f9b0f22e
	I1109 14:53:33.282727  589106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.crt.f9b0f22e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.12]
	I1109 14:53:33.515763  589106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.crt.f9b0f22e ...
	I1109 14:53:33.515815  589106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.crt.f9b0f22e: {Name:mka83b73468a9c582a3825decb393050b76eaa0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:53:33.516064  589106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.key.f9b0f22e ...
	I1109 14:53:33.516092  589106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.key.f9b0f22e: {Name:mkde324edebe62223e037a2e23b064e1b0be827f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:53:33.516247  589106 certs.go:382] copying /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.crt.f9b0f22e -> /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.crt
	I1109 14:53:33.516352  589106 certs.go:386] copying /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.key.f9b0f22e -> /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.key
	I1109 14:53:33.516412  589106 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/proxy-client.key
	I1109 14:53:33.516428  589106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/proxy-client.crt with IP's: []
	I1109 14:53:34.151516  589106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/proxy-client.crt ...
	I1109 14:53:34.151551  589106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/proxy-client.crt: {Name:mk6a42b02ba3c7f94b379f0ac8ae2dea74b157c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:53:34.151752  589106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/proxy-client.key ...
	I1109 14:53:34.151765  589106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/proxy-client.key: {Name:mk1b9cd950e1f386d59f8be4c220f7646488c142 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:53:34.151998  589106 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473.pem (1338 bytes)
	W1109 14:53:34.152037  589106 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473_empty.pem, impossibly tiny 0 bytes
	I1109 14:53:34.152048  589106 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 14:53:34.152069  589106 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:53:34.152095  589106 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:53:34.152118  589106 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem (1679 bytes)
	I1109 14:53:34.152156  589106 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem (1708 bytes)
	I1109 14:53:34.152729  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:53:34.196758  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:53:34.247466  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:53:34.299706  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:53:34.346757  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1109 14:53:34.394674  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:53:34.462187  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:53:34.513525  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:53:34.560963  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem --> /usr/share/ca-certificates/5534732.pem (1708 bytes)
	I1109 14:53:34.601748  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:53:34.645300  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473.pem --> /usr/share/ca-certificates/553473.pem (1338 bytes)
	I1109 14:53:34.696010  589106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:53:34.723690  589106 ssh_runner.go:195] Run: openssl version
	I1109 14:53:34.735826  589106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:53:34.755212  589106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:53:34.762597  589106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:53:34.762721  589106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:53:34.772251  589106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:53:34.800931  589106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/553473.pem && ln -fs /usr/share/ca-certificates/553473.pem /etc/ssl/certs/553473.pem"
	I1109 14:53:34.822358  589106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/553473.pem
	I1109 14:53:34.830631  589106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:42 /usr/share/ca-certificates/553473.pem
	I1109 14:53:34.830723  589106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/553473.pem
	I1109 14:53:34.842969  589106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/553473.pem /etc/ssl/certs/51391683.0"
	I1109 14:53:34.864132  589106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5534732.pem && ln -fs /usr/share/ca-certificates/5534732.pem /etc/ssl/certs/5534732.pem"
	I1109 14:53:34.887233  589106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5534732.pem
	I1109 14:53:34.897921  589106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:42 /usr/share/ca-certificates/5534732.pem
	I1109 14:53:34.898026  589106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5534732.pem
	I1109 14:53:34.912186  589106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5534732.pem /etc/ssl/certs/3ec20f2e.0"
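
The ls/openssl/ln sequence above implements OpenSSL's hashed trust-directory convention: a CA dropped into /etc/ssl/certs is looked up via a symlink named <subject-hash>.0, where the hash is whatever openssl x509 -hash prints for that certificate (b5213941 for minikubeCA.pem here, 51391683 and 3ec20f2e for the two user certs). Condensed to one cert, the step is:

    # Install a CA under its OpenSSL subject-hash name so TLS clients can resolve it.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
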
	I1109 14:53:34.930933  589106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:53:34.937642  589106 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:53:34.937728  589106 kubeadm.go:401] StartCluster: {Name:auto-877855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-877855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.12 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:53:34.937860  589106 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:53:34.937934  589106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:53:34.995418  589106 cri.go:89] found id: ""
	I1109 14:53:34.995530  589106 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:53:35.017298  589106 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:53:35.034774  589106 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:53:35.051931  589106 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:53:35.051972  589106 kubeadm.go:158] found existing configuration files:
	
	I1109 14:53:35.052074  589106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 14:53:35.068247  589106 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:53:35.068330  589106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:53:35.085254  589106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 14:53:35.100809  589106 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:53:35.100899  589106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:53:35.136727  589106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 14:53:35.159172  589106 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:53:35.159259  589106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:53:35.182180  589106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 14:53:35.202324  589106 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:53:35.202417  589106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:53:35.221428  589106 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1109 14:53:35.295472  589106 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 14:53:35.295601  589106 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:53:35.440031  589106 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:53:35.440202  589106 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:53:35.440387  589106 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 14:53:35.469371  589106 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:53:32.953892  589495 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1109 14:53:32.954287  589495 start.go:159] libmachine.API.Create for "kindnet-877855" (driver="kvm2")
	I1109 14:53:32.954345  589495 client.go:173] LocalClient.Create starting
	I1109 14:53:32.954486  589495 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem
	I1109 14:53:32.954550  589495 main.go:143] libmachine: Decoding PEM data...
	I1109 14:53:32.954576  589495 main.go:143] libmachine: Parsing certificate...
	I1109 14:53:32.954675  589495 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem
	I1109 14:53:32.954718  589495 main.go:143] libmachine: Decoding PEM data...
	I1109 14:53:32.954738  589495 main.go:143] libmachine: Parsing certificate...
	I1109 14:53:32.973379  589495 main.go:143] libmachine: creating domain...
	I1109 14:53:32.973400  589495 main.go:143] libmachine: creating network...
	I1109 14:53:32.975391  589495 main.go:143] libmachine: found existing default network
	I1109 14:53:32.975692  589495 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1109 14:53:32.976817  589495 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e1:59:f3} reservation:<nil>}
	I1109 14:53:32.978062  589495 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:47:13:27} reservation:<nil>}
	I1109 14:53:32.979035  589495 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:18:1c:85} reservation:<nil>}
	I1109 14:53:32.980198  589495 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:38:1c:49} reservation:<nil>}
	I1109 14:53:32.981251  589495 network.go:211] skipping subnet 192.168.83.0/24 that is taken: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName:virbr5 IfaceIPv4:192.168.83.1 IfaceMTU:1500 IfaceMAC:52:54:00:b2:5c:8f} reservation:<nil>}
	I1109 14:53:32.982826  589495 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d68550}
	I1109 14:53:32.982969  589495 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-kindnet-877855</name>
	  <dns enable='no'/>
	  <ip address='192.168.94.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.94.2' end='192.168.94.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1109 14:53:33.162660  589495 main.go:143] libmachine: creating private network mk-kindnet-877855 192.168.94.0/24...
	I1109 14:53:33.280165  589495 main.go:143] libmachine: private network mk-kindnet-877855 192.168.94.0/24 created
	I1109 14:53:33.280525  589495 main.go:143] libmachine: <network>
	  <name>mk-kindnet-877855</name>
	  <uuid>0bd0bc54-3235-4ba3-a183-c5a9b6be600b</uuid>
	  <bridge name='virbr6' stp='on' delay='0'/>
	  <mac address='52:54:00:99:60:b3'/>
	  <dns enable='no'/>
	  <ip address='192.168.94.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.94.2' end='192.168.94.253'/>
	    </dhcp>
	  </ip>
	</network>
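
libmachine scanned past the five occupied /24s, picked 192.168.94.0/24, and handed libvirt the short isolated-network definition (no <forward> element, so guests reach each other and the host but are not NATed); libvirt then filled in the uuid, bridge name virbr6, and MAC seen in the dump above. Done by hand, the create-and-activate step is roughly:

    # Assuming the <network> XML above is saved to /tmp/mk-kindnet-877855.xml
    # (illustrative path), define and start it with virsh:
    virsh --connect qemu:///system net-define /tmp/mk-kindnet-877855.xml
    virsh --connect qemu:///system net-start mk-kindnet-877855
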
	
	I1109 14:53:33.280567  589495 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855 ...
	I1109 14:53:33.280596  589495 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21139-549598/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso
	I1109 14:53:33.280608  589495 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 14:53:33.280706  589495 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21139-549598/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21139-549598/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso...
	I1109 14:53:33.597765  589495 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855/id_rsa...
	I1109 14:53:33.650876  589495 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855/kindnet-877855.rawdisk...
	I1109 14:53:33.650935  589495 main.go:143] libmachine: Writing magic tar header
	I1109 14:53:33.650975  589495 main.go:143] libmachine: Writing SSH key tar header
	I1109 14:53:33.651096  589495 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855 ...
	I1109 14:53:33.651209  589495 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855
	I1109 14:53:33.651252  589495 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855 (perms=drwx------)
	I1109 14:53:33.651271  589495 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598/.minikube/machines
	I1109 14:53:33.651292  589495 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598/.minikube/machines (perms=drwxr-xr-x)
	I1109 14:53:33.651309  589495 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 14:53:33.651325  589495 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598/.minikube (perms=drwxr-xr-x)
	I1109 14:53:33.651344  589495 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598
	I1109 14:53:33.651359  589495 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598 (perms=drwxrwxr-x)
	I1109 14:53:33.651377  589495 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1109 14:53:33.651393  589495 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1109 14:53:33.651409  589495 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1109 14:53:33.651430  589495 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1109 14:53:33.651449  589495 main.go:143] libmachine: checking permissions on dir: /home
	I1109 14:53:33.651494  589495 main.go:143] libmachine: skipping /home - not owner
	I1109 14:53:33.651510  589495 main.go:143] libmachine: defining domain...
	I1109 14:53:33.653585  589495 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>kindnet-877855</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855/kindnet-877855.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-kindnet-877855'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
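
The hand-written domain XML is deliberately sparse: minikube pins only the name, memory, vCPUs, boot order, the two disks (boot ISO on SCSI, raw disk on virtio) and the two NICs, and lets libvirt assign MAC addresses, PCI slots and controllers, which is why the "starting domain XML" dump below is so much longer. The equivalent manual round trip, assuming the XML were saved to a file:

    # Persist the domain, boot it, and inspect libvirt's filled-in defaults.
    virsh --connect qemu:///system define /tmp/kindnet-877855.xml   # illustrative path
    virsh --connect qemu:///system start kindnet-877855
    virsh --connect qemu:///system dumpxml kindnet-877855
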
	
	I1109 14:53:33.775888  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:12:a0:39 in network default
	I1109 14:53:33.777154  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:33.777196  589495 main.go:143] libmachine: starting domain...
	I1109 14:53:33.777207  589495 main.go:143] libmachine: ensuring networks are active...
	I1109 14:53:33.778984  589495 main.go:143] libmachine: Ensuring network default is active
	I1109 14:53:33.780335  589495 main.go:143] libmachine: Ensuring network mk-kindnet-877855 is active
	I1109 14:53:33.781486  589495 main.go:143] libmachine: getting domain XML...
	I1109 14:53:33.783137  589495 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>kindnet-877855</name>
	  <uuid>8165d45a-d497-4c2c-8c31-72087adf09aa</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855/kindnet-877855.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:1c:25:d5'/>
	      <source network='mk-kindnet-877855'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:12:a0:39'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1109 14:53:32.514159  589326 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:53:32.514195  589326 machine.go:97] duration metric: took 6.997864748s to provisionDockerMachine
	I1109 14:53:32.514211  589326 start.go:293] postStartSetup for "pause-750355" (driver="kvm2")
	I1109 14:53:32.514241  589326 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:53:32.514343  589326 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:53:32.518330  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.519023  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:32.519069  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.519325  589326 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/pause-750355/id_rsa Username:docker}
	I1109 14:53:32.617883  589326 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:53:32.624741  589326 info.go:137] Remote host: Buildroot 2025.02
	I1109 14:53:32.624826  589326 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-549598/.minikube/addons for local assets ...
	I1109 14:53:32.624922  589326 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-549598/.minikube/files for local assets ...
	I1109 14:53:32.625068  589326 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem -> 5534732.pem in /etc/ssl/certs
	I1109 14:53:32.625275  589326 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:53:32.646938  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem --> /etc/ssl/certs/5534732.pem (1708 bytes)
	I1109 14:53:32.702728  589326 start.go:296] duration metric: took 188.497538ms for postStartSetup
	I1109 14:53:32.702787  589326 fix.go:56] duration metric: took 7.192104702s for fixHost
	I1109 14:53:32.707025  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.707632  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:32.707664  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.707938  589326 main.go:143] libmachine: Using SSH client type: native
	I1109 14:53:32.708236  589326 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1109 14:53:32.708255  589326 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1109 14:53:32.848258  589326 main.go:143] libmachine: SSH cmd err, output: <nil>: 1762700012.843158872
	
	I1109 14:53:32.848293  589326 fix.go:216] guest clock: 1762700012.843158872
	I1109 14:53:32.848302  589326 fix.go:229] Guest: 2025-11-09 14:53:32.843158872 +0000 UTC Remote: 2025-11-09 14:53:32.702805276 +0000 UTC m=+10.819470767 (delta=140.353596ms)
	I1109 14:53:32.848332  589326 fix.go:200] guest clock delta is within tolerance: 140.353596ms
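
The delta is plain subtraction of the two timestamps logged above: 1762700012.843158872 s (guest) - 1762700012.702805276 s (host reference) = 0.140353596 s, the 140.353596ms reported, so the guest clock is left alone rather than being reset.
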
	I1109 14:53:32.848341  589326 start.go:83] releasing machines lock for "pause-750355", held for 7.33770666s
	I1109 14:53:32.852953  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.853612  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:32.853652  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.854578  589326 ssh_runner.go:195] Run: cat /version.json
	I1109 14:53:32.854645  589326 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:53:32.858821  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.859048  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.859461  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:32.859491  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.859702  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:32.859764  589326 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/pause-750355/id_rsa Username:docker}
	I1109 14:53:32.859784  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.860173  589326 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/pause-750355/id_rsa Username:docker}
	I1109 14:53:32.982848  589326 ssh_runner.go:195] Run: systemctl --version
	I1109 14:53:32.992834  589326 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:53:33.167243  589326 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:53:33.184281  589326 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:53:33.184428  589326 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:53:33.199770  589326 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
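
The find one-liner above parks any pre-existing bridge/podman CNI configs by renaming them to *.mk_disabled, so only minikube's chosen CNI config survives in /etc/cni/net.d (here there was nothing to move). With the shell quoting spelled out, the same command reads:

    # Rename competing bridge/podman CNI configs out of the way, idempotently.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
        -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
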
	I1109 14:53:33.199835  589326 start.go:496] detecting cgroup driver to use...
	I1109 14:53:33.199924  589326 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:53:33.228861  589326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:53:33.253162  589326 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:53:33.253247  589326 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:53:33.277765  589326 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:53:33.306681  589326 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:53:33.547679  589326 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:53:33.789112  589326 docker.go:234] disabling docker service ...
	I1109 14:53:33.789192  589326 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:53:33.835061  589326 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:53:33.859423  589326 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:53:34.095668  589326 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:53:34.348950  589326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:53:34.370306  589326 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:53:34.406034  589326 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:53:34.406113  589326 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.424583  589326 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:53:34.424702  589326 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.444978  589326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.503318  589326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.536038  589326 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:53:34.557024  589326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.579042  589326 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.611977  589326 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.643050  589326 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:53:34.662589  589326 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:53:34.679267  589326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:53:35.016525  589326 ssh_runner.go:195] Run: sudo systemctl restart crio
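
Net effect of the sed/grep edits above on /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned, cgroupfs becomes the cgroup manager, conmon moves into the pod cgroup, and unprivileged processes may bind low ports; the daemon-reload plus crio restart then makes CRI-O reread the drop-in. A quick check of the outcome (expected values reconstructed from the commands, not captured in this log):

    # Confirm the drop-in now carries the values the edits were meant to set.
    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [ ... ])
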
	I1109 14:53:35.469636  589326 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:53:35.469725  589326 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:53:35.484191  589326 start.go:564] Will wait 60s for crictl version
	I1109 14:53:35.484304  589326 ssh_runner.go:195] Run: which crictl
	I1109 14:53:35.498865  589326 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1109 14:53:35.624105  589326 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1109 14:53:35.624234  589326 ssh_runner.go:195] Run: crio --version
	I1109 14:53:35.723482  589326 ssh_runner.go:195] Run: crio --version
	I1109 14:53:35.811809  589326 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1109 14:53:35.818067  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:35.818967  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:35.819010  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:35.819301  589326 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1109 14:53:35.831482  589326 kubeadm.go:884] updating cluster {Name:pause-750355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-750355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.177 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:53:35.831723  589326 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:53:35.831834  589326 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:53:35.984320  589326 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:53:35.984356  589326 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:53:35.984428  589326 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:53:36.087631  589326 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:53:36.087665  589326 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:53:36.087676  589326 kubeadm.go:935] updating node { 192.168.61.177 8443 v1.34.1 crio true true} ...
	I1109 14:53:36.087855  589326 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-750355 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-750355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
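
The [Service] stanza above is the 10-kubeadm.conf drop-in scp'd a few lines below (312 bytes); the bare ExecStart= line is the standard systemd override idiom that clears the base unit's ExecStart before redefining it, since a simple service may declare only one. Two quick ways to eyeball the merged result on the node:

    # Show the kubelet unit with drop-ins applied, and the effective ExecStart.
    systemctl cat kubelet.service
    systemctl show -p ExecStart kubelet.service
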
	I1109 14:53:36.087968  589326 ssh_runner.go:195] Run: crio config
	I1109 14:53:36.214692  589326 cni.go:84] Creating CNI manager for ""
	I1109 14:53:36.214727  589326 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 14:53:36.214752  589326 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:53:36.214790  589326 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.177 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-750355 NodeName:pause-750355 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:53:36.215030  589326 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.177
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-750355"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.177"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.177"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:53:36.215131  589326 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:53:36.252652  589326 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:53:36.252755  589326 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:53:36.278942  589326 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1109 14:53:36.330921  589326 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:53:36.362721  589326 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1109 14:53:36.399186  589326 ssh_runner.go:195] Run: grep 192.168.61.177	control-plane.minikube.internal$ /etc/hosts
	I1109 14:53:36.407571  589326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:53:36.697284  589326 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:53:36.743739  589326 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355 for IP: 192.168.61.177
	I1109 14:53:36.743768  589326 certs.go:195] generating shared ca certs ...
	I1109 14:53:36.743788  589326 certs.go:227] acquiring lock for ca certs: {Name:mkc766226c1ec8ac0cc61519ae61374bb0aa3b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:53:36.744005  589326 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key
	I1109 14:53:36.744085  589326 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key
	I1109 14:53:36.744113  589326 certs.go:257] generating profile certs ...
	I1109 14:53:36.744239  589326 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/client.key
	I1109 14:53:36.744328  589326 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/apiserver.key.0e71cea4
	I1109 14:53:36.744407  589326 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/proxy-client.key
	I1109 14:53:36.744547  589326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473.pem (1338 bytes)
	W1109 14:53:36.744588  589326 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473_empty.pem, impossibly tiny 0 bytes
	I1109 14:53:36.744605  589326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 14:53:36.744638  589326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:53:36.744667  589326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:53:36.744701  589326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem (1679 bytes)
	I1109 14:53:36.744757  589326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem (1708 bytes)
	I1109 14:53:36.745717  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:53:36.802898  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:53:36.930012  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:53:35.475093  589106 out.go:252]   - Generating certificates and keys ...
	I1109 14:53:35.475254  589106 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:53:35.475404  589106 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:53:35.641992  589106 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 14:53:35.707786  589106 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 14:53:36.093312  589106 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 14:53:36.313335  589106 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:53:36.686736  589106 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:53:36.686989  589106 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-877855 localhost] and IPs [192.168.50.12 127.0.0.1 ::1]
	I1109 14:53:37.162098  589106 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:53:37.162330  589106 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-877855 localhost] and IPs [192.168.50.12 127.0.0.1 ::1]
	I1109 14:53:37.241601  589106 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:53:37.611638  589106 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:53:37.798200  589106 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:53:37.798311  589106 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:53:37.899373  589106 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:53:38.225812  589106 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 14:53:38.927147  589106 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:53:39.343340  589106 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:53:39.895452  589106 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:53:39.895948  589106 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:53:39.898632  589106 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
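The bracketed [certs]/[kubeconfig]/[etcd] lines are kubeadm's own phase output, captured by kubeadm.go:319. Individual phases can also be re-run on their own; a sketch driving just the etcd server cert phase via exec (assuming a kubeadm release that exposes init phases, and the kubeadm.yaml path seen later in this log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Regenerates only etcd/server.{crt,key} under the configured
    	// certificatesDir, with the SANs derived from the config file.
    	out, err := exec.Command("kubeadm", "init", "phase", "certs", "etcd-server",
    		"--config", "/var/tmp/minikube/kubeadm.yaml").CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("phase failed:", err)
    	}
    }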
	I1109 14:53:35.922992  589495 main.go:143] libmachine: waiting for domain to start...
	I1109 14:53:35.924647  589495 main.go:143] libmachine: domain is now running
	I1109 14:53:35.924676  589495 main.go:143] libmachine: waiting for IP...
	I1109 14:53:35.925846  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:35.926853  589495 main.go:143] libmachine: no network interface addresses found for domain kindnet-877855 (source=lease)
	I1109 14:53:35.926880  589495 main.go:143] libmachine: trying to list again with source=arp
	I1109 14:53:35.927387  589495 main.go:143] libmachine: unable to find current IP address of domain kindnet-877855 in network mk-kindnet-877855 (interfaces detected: [])
	I1109 14:53:35.927453  589495 retry.go:31] will retry after 193.095337ms: waiting for domain to come up
	I1109 14:53:36.122207  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:36.123194  589495 main.go:143] libmachine: no network interface addresses found for domain kindnet-877855 (source=lease)
	I1109 14:53:36.123242  589495 main.go:143] libmachine: trying to list again with source=arp
	I1109 14:53:36.123875  589495 main.go:143] libmachine: unable to find current IP address of domain kindnet-877855 in network mk-kindnet-877855 (interfaces detected: [])
	I1109 14:53:36.123928  589495 retry.go:31] will retry after 352.484391ms: waiting for domain to come up
	I1109 14:53:36.478936  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:36.479857  589495 main.go:143] libmachine: no network interface addresses found for domain kindnet-877855 (source=lease)
	I1109 14:53:36.479901  589495 main.go:143] libmachine: trying to list again with source=arp
	I1109 14:53:36.480451  589495 main.go:143] libmachine: unable to find current IP address of domain kindnet-877855 in network mk-kindnet-877855 (interfaces detected: [])
	I1109 14:53:36.480504  589495 retry.go:31] will retry after 350.862438ms: waiting for domain to come up
	I1109 14:53:36.833464  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:36.834331  589495 main.go:143] libmachine: no network interface addresses found for domain kindnet-877855 (source=lease)
	I1109 14:53:36.834354  589495 main.go:143] libmachine: trying to list again with source=arp
	I1109 14:53:36.834850  589495 main.go:143] libmachine: unable to find current IP address of domain kindnet-877855 in network mk-kindnet-877855 (interfaces detected: [])
	I1109 14:53:36.834893  589495 retry.go:31] will retry after 572.965646ms: waiting for domain to come up
	I1109 14:53:37.410128  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:37.411039  589495 main.go:143] libmachine: no network interface addresses found for domain kindnet-877855 (source=lease)
	I1109 14:53:37.411070  589495 main.go:143] libmachine: trying to list again with source=arp
	I1109 14:53:37.411536  589495 main.go:143] libmachine: unable to find current IP address of domain kindnet-877855 in network mk-kindnet-877855 (interfaces detected: [])
	I1109 14:53:37.411584  589495 retry.go:31] will retry after 466.01613ms: waiting for domain to come up
	I1109 14:53:37.879279  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:37.880263  589495 main.go:143] libmachine: no network interface addresses found for domain kindnet-877855 (source=lease)
	I1109 14:53:37.880297  589495 main.go:143] libmachine: trying to list again with source=arp
	I1109 14:53:37.880868  589495 main.go:143] libmachine: unable to find current IP address of domain kindnet-877855 in network mk-kindnet-877855 (interfaces detected: [])
	I1109 14:53:37.880932  589495 retry.go:31] will retry after 595.157924ms: waiting for domain to come up
	I1109 14:53:38.478392  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:38.479624  589495 main.go:143] libmachine: no network interface addresses found for domain kindnet-877855 (source=lease)
	I1109 14:53:38.479658  589495 main.go:143] libmachine: trying to list again with source=arp
	I1109 14:53:38.480272  589495 main.go:143] libmachine: unable to find current IP address of domain kindnet-877855 in network mk-kindnet-877855 (interfaces detected: [])
	I1109 14:53:38.480322  589495 retry.go:31] will retry after 727.916196ms: waiting for domain to come up
	I1109 14:53:39.210883  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:39.211957  589495 main.go:143] libmachine: no network interface addresses found for domain kindnet-877855 (source=lease)
	I1109 14:53:39.211985  589495 main.go:143] libmachine: trying to list again with source=arp
	I1109 14:53:39.212422  589495 main.go:143] libmachine: unable to find current IP address of domain kindnet-877855 in network mk-kindnet-877855 (interfaces detected: [])
	I1109 14:53:39.212489  589495 retry.go:31] will retry after 937.951447ms: waiting for domain to come up
	I1109 14:53:40.151829  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:40.152878  589495 main.go:143] libmachine: no network interface addresses found for domain kindnet-877855 (source=lease)
	I1109 14:53:40.152905  589495 main.go:143] libmachine: trying to list again with source=arp
	I1109 14:53:40.153376  589495 main.go:143] libmachine: unable to find current IP address of domain kindnet-877855 in network mk-kindnet-877855 (interfaces detected: [])
	I1109 14:53:40.153427  589495 retry.go:31] will retry after 1.325402555s: waiting for domain to come up
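The retry.go lines above poll libvirt for the domain's DHCP lease with a growing, jittered delay between attempts. A minimal sketch of that retry pattern (pollIP is a hypothetical stand-in for the lease/ARP lookup):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoIP = errors.New("domain has no IP yet")

    // pollIP stands in for the libvirt lease/ARP query; illustrative only.
    func pollIP() (string, error) { return "", errNoIP }

    // retryWithBackoff keeps calling fn with a jittered, growing delay,
    // mirroring the "will retry after ..." lines in the log.
    func retryWithBackoff(attempts int, base time.Duration, fn func() (string, error)) (string, error) {
    	delay := base
    	for i := 0; i < attempts; i++ {
    		ip, err := fn()
    		if err == nil {
    			return ip, nil
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %s: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2 // grow ~1.5x per attempt
    	}
    	return "", fmt.Errorf("gave up after %d attempts", attempts)
    }

    func main() {
    	_, err := retryWithBackoff(5, 200*time.Millisecond, pollIP)
    	fmt.Println(err)
    }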
	I1109 14:53:37.038562  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:53:37.132415  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1109 14:53:37.230643  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:53:37.344299  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:53:37.428591  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:53:37.508289  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem --> /usr/share/ca-certificates/5534732.pem (1708 bytes)
	I1109 14:53:37.601954  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:53:37.699927  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473.pem --> /usr/share/ca-certificates/553473.pem (1338 bytes)
	I1109 14:53:37.790527  589326 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:53:37.849189  589326 ssh_runner.go:195] Run: openssl version
	I1109 14:53:37.863966  589326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:53:37.899063  589326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:53:37.909975  589326 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:53:37.910059  589326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:53:37.925755  589326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:53:37.951822  589326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/553473.pem && ln -fs /usr/share/ca-certificates/553473.pem /etc/ssl/certs/553473.pem"
	I1109 14:53:37.986396  589326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/553473.pem
	I1109 14:53:38.002761  589326 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:42 /usr/share/ca-certificates/553473.pem
	I1109 14:53:38.002885  589326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/553473.pem
	I1109 14:53:38.018873  589326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/553473.pem /etc/ssl/certs/51391683.0"
	I1109 14:53:38.053748  589326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5534732.pem && ln -fs /usr/share/ca-certificates/5534732.pem /etc/ssl/certs/5534732.pem"
	I1109 14:53:38.080199  589326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5534732.pem
	I1109 14:53:38.096454  589326 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:42 /usr/share/ca-certificates/5534732.pem
	I1109 14:53:38.096542  589326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5534732.pem
	I1109 14:53:38.113575  589326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5534732.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:53:38.139320  589326 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:53:38.148439  589326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:53:38.162546  589326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:53:38.180182  589326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:53:38.194655  589326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:53:38.209689  589326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:53:38.225878  589326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
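Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 86400 seconds (24 hours): exit status 0 means it will still be valid, 1 means it would expire in that window. A sketch interpreting that exit status from Go:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // expiresWithinDay mirrors the `openssl x509 -checkend 86400` probes:
    // openssl exits 0 if the cert is still valid 86400s from now, 1 if not.
    func expiresWithinDay(path string) (bool, error) {
    	err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run()
    	if err == nil {
    		return false, nil // still valid tomorrow
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return true, nil // would expire within 24h
    	}
    	return false, err // openssl itself failed (bad path, etc.)
    }

    func main() {
    	soon, err := expiresWithinDay("/var/lib/minikube/certs/etcd/server.crt")
    	fmt.Println(soon, err)
    }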
	I1109 14:53:38.242224  589326 kubeadm.go:401] StartCluster: {Name:pause-750355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:pause-750355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.177 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:53:38.242380  589326 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:53:38.242485  589326 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:53:38.336888  589326 cri.go:89] found id: "45947c6dce347ae279e4132072fb89c1b0522e1b326178274424222c2588321f"
	I1109 14:53:38.336936  589326 cri.go:89] found id: "75289d4097d96308d69b2aaecdc6ec6132f28173770c61a0750ebca515cf1c7e"
	I1109 14:53:38.336944  589326 cri.go:89] found id: "14fc4c3df613902789c27b68b1a5733c47ba7f7489099ab5c477b0483663c4aa"
	I1109 14:53:38.336950  589326 cri.go:89] found id: "59e2c2e4d7754e6a89e73e34e1fff37173c66fb5e41dec40edd897afc30be428"
	I1109 14:53:38.336953  589326 cri.go:89] found id: "21775c560e54e26138ce07b6c06d3e22037a109f355e39c9929ed04ace19914a"
	I1109 14:53:38.336959  589326 cri.go:89] found id: "604c10edad5bca905fe997db6a580b99ebde28984c8a549a484170114ee3ddba"
	I1109 14:53:38.336965  589326 cri.go:89] found id: "78781e9a162ec886cd6c744eaa944c53245caabf93ca6dceadcffcbe3c2ebd45"
	I1109 14:53:38.336970  589326 cri.go:89] found id: "76fa8220ee04417468321c6b207132c59ea2361bc897036d67d13cd60c74934d"
	I1109 14:53:38.336976  589326 cri.go:89] found id: "58fe997c2cbccb9c9742a4120bf476b3f9d5a51772918e6de87c43fbeb1fb8fa"
	I1109 14:53:38.336988  589326 cri.go:89] found id: ""
	I1109 14:53:38.337060  589326 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
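The "found id" lines at the end of that log came from the crictl invocation shown there: `--quiet` prints one container ID per line and `--label` filters on the pod-namespace label. A sketch splitting that output the same way cri.go does:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listKubeSystemContainers wraps the crictl call from the log:
    // one container ID per line, filtered to the kube-system namespace.
    func listKubeSystemContainers() ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(string(out), "\n") {
    		if line = strings.TrimSpace(line); line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	ids, err := listKubeSystemContainers()
    	fmt.Println(len(ids), "containers", err)
    }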
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-750355 -n pause-750355
helpers_test.go:269: (dbg) Run:  kubectl --context pause-750355 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-750355 -n pause-750355
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-750355 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-750355 logs -n 25: (1.90744567s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ stopped-upgrade-667086 stop                                                                                                                                                                                             │ stopped-upgrade-667086    │ jenkins │ v1.32.0 │ 09 Nov 25 14:50 UTC │ 09 Nov 25 14:50 UTC │
	│ start   │ -p stopped-upgrade-667086 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                  │ stopped-upgrade-667086    │ jenkins │ v1.37.0 │ 09 Nov 25 14:50 UTC │ 09 Nov 25 14:51 UTC │
	│ stop    │ -p NoKubernetes-748314                                                                                                                                                                                                  │ NoKubernetes-748314       │ jenkins │ v1.37.0 │ 09 Nov 25 14:50 UTC │ 09 Nov 25 14:50 UTC │
	│ start   │ -p NoKubernetes-748314 --driver=kvm2  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-748314       │ jenkins │ v1.37.0 │ 09 Nov 25 14:50 UTC │ 09 Nov 25 14:51 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-353436 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ running-upgrade-353436    │ jenkins │ v1.37.0 │ 09 Nov 25 14:50 UTC │                     │
	│ delete  │ -p running-upgrade-353436                                                                                                                                                                                               │ running-upgrade-353436    │ jenkins │ v1.37.0 │ 09 Nov 25 14:50 UTC │ 09 Nov 25 14:50 UTC │
	│ start   │ -p cert-expiration-729640 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                    │ cert-expiration-729640    │ jenkins │ v1.37.0 │ 09 Nov 25 14:50 UTC │ 09 Nov 25 14:52 UTC │
	│ start   │ -p kubernetes-upgrade-699004 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                                                                         │ kubernetes-upgrade-699004 │ jenkins │ v1.37.0 │ 09 Nov 25 14:51 UTC │                     │
	│ start   │ -p kubernetes-upgrade-699004 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                  │ kubernetes-upgrade-699004 │ jenkins │ v1.37.0 │ 09 Nov 25 14:51 UTC │ 09 Nov 25 14:52 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-667086 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ stopped-upgrade-667086    │ jenkins │ v1.37.0 │ 09 Nov 25 14:51 UTC │                     │
	│ delete  │ -p stopped-upgrade-667086                                                                                                                                                                                               │ stopped-upgrade-667086    │ jenkins │ v1.37.0 │ 09 Nov 25 14:51 UTC │ 09 Nov 25 14:51 UTC │
	│ ssh     │ -p NoKubernetes-748314 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-748314       │ jenkins │ v1.37.0 │ 09 Nov 25 14:51 UTC │                     │
	│ delete  │ -p NoKubernetes-748314                                                                                                                                                                                                  │ NoKubernetes-748314       │ jenkins │ v1.37.0 │ 09 Nov 25 14:51 UTC │ 09 Nov 25 14:51 UTC │
	│ start   │ -p pause-750355 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-750355              │ jenkins │ v1.37.0 │ 09 Nov 25 14:51 UTC │ 09 Nov 25 14:53 UTC │
	│ start   │ -p force-systemd-flag-936534 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                               │ force-systemd-flag-936534 │ jenkins │ v1.37.0 │ 09 Nov 25 14:51 UTC │ 09 Nov 25 14:53 UTC │
	│ delete  │ -p kubernetes-upgrade-699004                                                                                                                                                                                            │ kubernetes-upgrade-699004 │ jenkins │ v1.37.0 │ 09 Nov 25 14:52 UTC │ 09 Nov 25 14:52 UTC │
	│ start   │ -p cert-options-868897 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-868897       │ jenkins │ v1.37.0 │ 09 Nov 25 14:52 UTC │ 09 Nov 25 14:53 UTC │
	│ ssh     │ force-systemd-flag-936534 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                    │ force-systemd-flag-936534 │ jenkins │ v1.37.0 │ 09 Nov 25 14:53 UTC │ 09 Nov 25 14:53 UTC │
	│ delete  │ -p force-systemd-flag-936534                                                                                                                                                                                            │ force-systemd-flag-936534 │ jenkins │ v1.37.0 │ 09 Nov 25 14:53 UTC │ 09 Nov 25 14:53 UTC │
	│ start   │ -p auto-877855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                                                                                   │ auto-877855               │ jenkins │ v1.37.0 │ 09 Nov 25 14:53 UTC │                     │
	│ start   │ -p pause-750355 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-750355              │ jenkins │ v1.37.0 │ 09 Nov 25 14:53 UTC │ 09 Nov 25 14:54 UTC │
	│ ssh     │ cert-options-868897 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-868897       │ jenkins │ v1.37.0 │ 09 Nov 25 14:53 UTC │ 09 Nov 25 14:53 UTC │
	│ ssh     │ -p cert-options-868897 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-868897       │ jenkins │ v1.37.0 │ 09 Nov 25 14:53 UTC │ 09 Nov 25 14:53 UTC │
	│ delete  │ -p cert-options-868897                                                                                                                                                                                                  │ cert-options-868897       │ jenkins │ v1.37.0 │ 09 Nov 25 14:53 UTC │ 09 Nov 25 14:53 UTC │
	│ start   │ -p kindnet-877855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                                                                                  │ kindnet-877855            │ jenkins │ v1.37.0 │ 09 Nov 25 14:53 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:53:30
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:53:30.734172  589495 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:53:30.734586  589495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:53:30.734607  589495 out.go:374] Setting ErrFile to fd 2...
	I1109 14:53:30.734615  589495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:53:30.735083  589495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 14:53:30.736151  589495 out.go:368] Setting JSON to false
	I1109 14:53:30.737540  589495 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":74160,"bootTime":1762625851,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:53:30.737734  589495 start.go:143] virtualization: kvm guest
	I1109 14:53:30.740086  589495 out.go:179] * [kindnet-877855] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:53:30.741759  589495 notify.go:221] Checking for updates...
	I1109 14:53:30.741777  589495 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:53:30.744697  589495 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:53:30.746457  589495 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 14:53:30.747999  589495 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 14:53:30.749444  589495 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:53:30.751001  589495 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:53:30.752958  589495 config.go:182] Loaded profile config "auto-877855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:53:30.753111  589495 config.go:182] Loaded profile config "cert-expiration-729640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:53:30.753217  589495 config.go:182] Loaded profile config "guest-746433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1109 14:53:30.753410  589495 config.go:182] Loaded profile config "pause-750355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:53:30.753551  589495 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:53:30.799585  589495 out.go:179] * Using the kvm2 driver based on user configuration
	I1109 14:53:30.801521  589495 start.go:309] selected driver: kvm2
	I1109 14:53:30.801558  589495 start.go:930] validating driver "kvm2" against <nil>
	I1109 14:53:30.801574  589495 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:53:30.802709  589495 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 14:53:30.803061  589495 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:53:30.803102  589495 cni.go:84] Creating CNI manager for "kindnet"
	I1109 14:53:30.803108  589495 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 14:53:30.803158  589495 start.go:353] cluster config:
	{Name:kindnet-877855 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-877855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:53:30.803283  589495 iso.go:125] acquiring lock: {Name:mk8f0547b4600c0b2d1e831f024251df19b55199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:53:30.806238  589495 out.go:179] * Starting "kindnet-877855" primary control-plane node in "kindnet-877855" cluster
	I1109 14:53:30.807754  589495 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:53:30.807875  589495 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1109 14:53:30.807896  589495 cache.go:65] Caching tarball of preloaded images
	I1109 14:53:30.808055  589495 preload.go:238] Found /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1109 14:53:30.808072  589495 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:53:30.808194  589495 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/config.json ...
	I1109 14:53:30.808217  589495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/config.json: {Name:mke9fcd22a404f8037183a69c2c1c8c63d826560 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:53:30.808393  589495 start.go:360] acquireMachinesLock for kindnet-877855: {Name:mkb2a0b4add9a99b18ce9ab72b74eb5b4fda0e0a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1109 14:53:32.848495  589495 start.go:364] duration metric: took 2.040051873s to acquireMachinesLock for "kindnet-877855"
	I1109 14:53:32.848634  589495 start.go:93] Provisioning new machine with config: &{Name:kindnet-877855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.34.1 ClusterName:kindnet-877855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:53:32.848849  589495 start.go:125] createHost starting for "" (driver="kvm2")
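acquireMachinesLock serializes VM creation across the parallel test profiles; the {Delay:500ms Timeout:13m0s} fields suggest poll-until-timeout semantics, and the 2.04s duration metric above is the time this profile spent waiting its turn. An illustrative file-based version of that pattern (not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // acquireLock polls for an exclusive lock file every delay until timeout;
    // an illustrative stand-in for minikube's acquireMachinesLock.
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out acquiring %s", path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	start := time.Now()
    	release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release()
    	fmt.Printf("took %s to acquire lock\n", time.Since(start)) // cf. the duration metric above
    }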
	I1109 14:53:30.266813  589106 crio.go:462] duration metric: took 2.267401499s to copy over tarball
	I1109 14:53:30.266985  589106 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1109 14:53:32.335590  589106 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.068540919s)
	I1109 14:53:32.335618  589106 crio.go:469] duration metric: took 2.068753812s to extract the tarball
	I1109 14:53:32.335627  589106 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1109 14:53:32.381849  589106 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:53:32.438314  589106 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:53:32.438345  589106 cache_images.go:86] Images are preloaded, skipping loading
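The preload check above parses `crictl images --output json` to confirm the tarball delivered every expected image. A sketch of that verification, under the assumption that crictl's JSON mirrors the CRI ListImages response (an images array whose entries carry repoTags; the field names here are an assumption):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // Assumed shape of crictl's JSON output: {"images":[{"repoTags":[...]}]}.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func preloadedTags() (map[string]bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return nil, err
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		return nil, err
    	}
    	tags := map[string]bool{}
    	for _, img := range list.Images {
    		for _, t := range img.RepoTags {
    			tags[t] = true
    		}
    	}
    	return tags, nil
    }

    func main() {
    	tags, err := preloadedTags()
    	fmt.Println(tags["registry.k8s.io/kube-apiserver:v1.34.1"], err)
    }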
	I1109 14:53:32.438363  589106 kubeadm.go:935] updating node { 192.168.50.12 8443 v1.34.1 crio true true} ...
	I1109 14:53:32.438502  589106 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-877855 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-877855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:53:32.438598  589106 ssh_runner.go:195] Run: crio config
	I1109 14:53:32.504630  589106 cni.go:84] Creating CNI manager for ""
	I1109 14:53:32.504679  589106 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 14:53:32.504712  589106 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:53:32.504752  589106 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.12 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-877855 NodeName:auto-877855 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:53:32.505030  589106 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-877855"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.12"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.12"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
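The generated file stacks four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Note that imageGCHighThresholdPercent: 100 and the 0% evictionHard values deliberately disable kubelet disk eviction inside the test VM. Once written to /var/tmp/minikube/kubeadm.yaml, the file can be sanity-checked with kubeadm itself; a sketch, assuming a kubeadm release recent enough to ship `kubeadm config validate`:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Checks every document in the file against its schema.
    	out, err := exec.Command("kubeadm", "config", "validate",
    		"--config", "/var/tmp/minikube/kubeadm.yaml").CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("validation failed:", err)
    	}
    }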
	I1109 14:53:32.505118  589106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:53:32.522628  589106 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:53:32.522728  589106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:53:32.539299  589106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1109 14:53:32.567459  589106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:53:32.594058  589106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1109 14:53:32.628113  589106 ssh_runner.go:195] Run: grep 192.168.50.12	control-plane.minikube.internal$ /etc/hosts
	I1109 14:53:32.636674  589106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:53:32.660195  589106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:53:32.878314  589106 ssh_runner.go:195] Run: sudo systemctl start kubelet
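The shell one-liner above rewrites /etc/hosts so that exactly one control-plane.minikube.internal entry exists: strip any old line, append the fresh mapping, copy the temp file back. The same logic in Go, as an illustration of what the pipeline does:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry mirrors the grep -v / echo / cp pipeline in the log:
    // drop any existing line ending in "\t<host>", then append the new mapping.
    func ensureHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.50.12", "control-plane.minikube.internal"))
    }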
	I1109 14:53:32.923282  589106 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855 for IP: 192.168.50.12
	I1109 14:53:32.923314  589106 certs.go:195] generating shared ca certs ...
	I1109 14:53:32.923332  589106 certs.go:227] acquiring lock for ca certs: {Name:mkc766226c1ec8ac0cc61519ae61374bb0aa3b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:53:32.923505  589106 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key
	I1109 14:53:32.923564  589106 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key
	I1109 14:53:32.923578  589106 certs.go:257] generating profile certs ...
	I1109 14:53:32.923639  589106 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.key
	I1109 14:53:32.923654  589106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt with IP's: []
	I1109 14:53:33.282206  589106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt ...
	I1109 14:53:33.282244  589106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt: {Name:mke4eebdd3814f81479beef090b4209b5daba63e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:53:33.282522  589106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.key ...
	I1109 14:53:33.282548  589106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.key: {Name:mkabf1e0059fcdf293a9fa843cad66ef44313960 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:53:33.282701  589106 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.key.f9b0f22e
	I1109 14:53:33.282727  589106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.crt.f9b0f22e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.12]
	I1109 14:53:33.515763  589106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.crt.f9b0f22e ...
	I1109 14:53:33.515815  589106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.crt.f9b0f22e: {Name:mka83b73468a9c582a3825decb393050b76eaa0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:53:33.516064  589106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.key.f9b0f22e ...
	I1109 14:53:33.516092  589106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.key.f9b0f22e: {Name:mkde324edebe62223e037a2e23b064e1b0be827f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:53:33.516247  589106 certs.go:382] copying /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.crt.f9b0f22e -> /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.crt
	I1109 14:53:33.516352  589106 certs.go:386] copying /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.key.f9b0f22e -> /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.key
	I1109 14:53:33.516412  589106 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/proxy-client.key
	I1109 14:53:33.516428  589106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/proxy-client.crt with IP's: []
	I1109 14:53:34.151516  589106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/proxy-client.crt ...
	I1109 14:53:34.151551  589106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/proxy-client.crt: {Name:mk6a42b02ba3c7f94b379f0ac8ae2dea74b157c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:53:34.151752  589106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/proxy-client.key ...
	I1109 14:53:34.151765  589106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/proxy-client.key: {Name:mk1b9cd950e1f386d59f8be4c220f7646488c142 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
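The crypto.go lines above generate the apiserver serving cert with IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.12], i.e. the service VIP, loopback, and the node IP. A condensed sketch of issuing such a cert with crypto/x509 (illustrative, not minikube's crypto.go; the CA here is a throwaway, whereas minikube reuses the one under .minikube/):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA for the example.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Serving cert with the IP SANs seen in the log above.
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.12"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		fmt.Println(err)
    		os.Exit(1)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }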
	I1109 14:53:34.151998  589106 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473.pem (1338 bytes)
	W1109 14:53:34.152037  589106 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473_empty.pem, impossibly tiny 0 bytes
	I1109 14:53:34.152048  589106 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 14:53:34.152069  589106 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:53:34.152095  589106 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:53:34.152118  589106 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem (1679 bytes)
	I1109 14:53:34.152156  589106 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem (1708 bytes)
	I1109 14:53:34.152729  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:53:34.196758  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:53:34.247466  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:53:34.299706  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:53:34.346757  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1109 14:53:34.394674  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:53:34.462187  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:53:34.513525  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:53:34.560963  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem --> /usr/share/ca-certificates/5534732.pem (1708 bytes)
	I1109 14:53:34.601748  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:53:34.645300  589106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473.pem --> /usr/share/ca-certificates/553473.pem (1338 bytes)
	I1109 14:53:34.696010  589106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:53:34.723690  589106 ssh_runner.go:195] Run: openssl version
	I1109 14:53:34.735826  589106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:53:34.755212  589106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:53:34.762597  589106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:53:34.762721  589106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:53:34.772251  589106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:53:34.800931  589106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/553473.pem && ln -fs /usr/share/ca-certificates/553473.pem /etc/ssl/certs/553473.pem"
	I1109 14:53:34.822358  589106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/553473.pem
	I1109 14:53:34.830631  589106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:42 /usr/share/ca-certificates/553473.pem
	I1109 14:53:34.830723  589106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/553473.pem
	I1109 14:53:34.842969  589106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/553473.pem /etc/ssl/certs/51391683.0"
	I1109 14:53:34.864132  589106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5534732.pem && ln -fs /usr/share/ca-certificates/5534732.pem /etc/ssl/certs/5534732.pem"
	I1109 14:53:34.887233  589106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5534732.pem
	I1109 14:53:34.897921  589106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:42 /usr/share/ca-certificates/5534732.pem
	I1109 14:53:34.898026  589106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5534732.pem
	I1109 14:53:34.912186  589106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5534732.pem /etc/ssl/certs/3ec20f2e.0"
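The `openssl x509 -hash -noout` calls above compute the subject hash that names the /etc/ssl/certs/<hash>.0 symlinks (b5213941.0, 51391683.0, 3ec20f2e.0); OpenSSL locates trusted CAs by exactly that filename convention. A sketch creating such a link:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // subjectHash shells out the same way the log does; openssl prints the
    // hash used to name /etc/ssl/certs/<hash>.0 symlinks.
    func subjectHash(pemPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	link := "/etc/ssl/certs/" + h + ".0"
    	// ln -fs equivalent: drop any stale link, then point it at the PEM.
    	os.Remove(link)
    	fmt.Println(link, os.Symlink("/etc/ssl/certs/minikubeCA.pem", link))
    }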
	I1109 14:53:34.930933  589106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:53:34.937642  589106 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:53:34.937728  589106 kubeadm.go:401] StartCluster: {Name:auto-877855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-877855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.12 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:53:34.937860  589106 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:53:34.937934  589106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:53:34.995418  589106 cri.go:89] found id: ""
	I1109 14:53:34.995530  589106 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:53:35.017298  589106 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:53:35.034774  589106 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:53:35.051931  589106 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:53:35.051972  589106 kubeadm.go:158] found existing configuration files:
	
	I1109 14:53:35.052074  589106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 14:53:35.068247  589106 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:53:35.068330  589106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:53:35.085254  589106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 14:53:35.100809  589106 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:53:35.100899  589106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:53:35.136727  589106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 14:53:35.159172  589106 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:53:35.159259  589106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:53:35.182180  589106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 14:53:35.202324  589106 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:53:35.202417  589106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
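	Each of the four grep/rm pairs above applies the same rule: a kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is treated as stale and removed before kubeadm init regenerates it. A local-filesystem sketch of that loop (the endpoint and file list come from the log; error handling simplified):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, c := range confs {
			data, err := os.ReadFile(c)
			// Missing file or missing endpoint: remove so kubeadm regenerates it.
			if err != nil || !strings.Contains(string(data), endpoint) {
				_ = os.Remove(c)
			}
		}
	}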
	I1109 14:53:35.221428  589106 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1109 14:53:35.295472  589106 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 14:53:35.295601  589106 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:53:35.440031  589106 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:53:35.440202  589106 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:53:35.440387  589106 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 14:53:35.469371  589106 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:53:32.953892  589495 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1109 14:53:32.954287  589495 start.go:159] libmachine.API.Create for "kindnet-877855" (driver="kvm2")
	I1109 14:53:32.954345  589495 client.go:173] LocalClient.Create starting
	I1109 14:53:32.954486  589495 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem
	I1109 14:53:32.954550  589495 main.go:143] libmachine: Decoding PEM data...
	I1109 14:53:32.954576  589495 main.go:143] libmachine: Parsing certificate...
	I1109 14:53:32.954675  589495 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem
	I1109 14:53:32.954718  589495 main.go:143] libmachine: Decoding PEM data...
	I1109 14:53:32.954738  589495 main.go:143] libmachine: Parsing certificate...
	I1109 14:53:32.973379  589495 main.go:143] libmachine: creating domain...
	I1109 14:53:32.973400  589495 main.go:143] libmachine: creating network...
	I1109 14:53:32.975391  589495 main.go:143] libmachine: found existing default network
	I1109 14:53:32.975692  589495 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1109 14:53:32.976817  589495 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e1:59:f3} reservation:<nil>}
	I1109 14:53:32.978062  589495 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:47:13:27} reservation:<nil>}
	I1109 14:53:32.979035  589495 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:18:1c:85} reservation:<nil>}
	I1109 14:53:32.980198  589495 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:38:1c:49} reservation:<nil>}
	I1109 14:53:32.981251  589495 network.go:211] skipping subnet 192.168.83.0/24 that is taken: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName:virbr5 IfaceIPv4:192.168.83.1 IfaceMTU:1500 IfaceMAC:52:54:00:b2:5c:8f} reservation:<nil>}
	I1109 14:53:32.982826  589495 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d68550}
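	The scan above walks candidate 192.168.x.0/24 subnets in order and skips any that a host interface already occupies, settling on the first free one (192.168.94.0/24 here). A rough standard-library sketch of that probe, assuming a fixed candidate list for illustration:

	package main

	import (
		"fmt"
		"net"
	)

	// taken reports whether any host interface address falls inside the subnet.
	func taken(subnet *net.IPNet) bool {
		ifaces, _ := net.Interfaces()
		for _, ifc := range ifaces {
			addrs, _ := ifc.Addrs()
			for _, a := range addrs {
				if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
					return true
				}
			}
		}
		return false
	}

	func main() {
		for _, octet := range []int{39, 50, 61, 72, 83, 94} {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			_, subnet, _ := net.ParseCIDR(cidr)
			if !taken(subnet) {
				fmt.Println("using free private subnet", cidr)
				return
			}
			fmt.Println("skipping subnet", cidr, "that is taken")
		}
	}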
	I1109 14:53:32.982969  589495 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-kindnet-877855</name>
	  <dns enable='no'/>
	  <ip address='192.168.94.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.94.2' end='192.168.94.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1109 14:53:33.162660  589495 main.go:143] libmachine: creating private network mk-kindnet-877855 192.168.94.0/24...
	I1109 14:53:33.280165  589495 main.go:143] libmachine: private network mk-kindnet-877855 192.168.94.0/24 created
	I1109 14:53:33.280525  589495 main.go:143] libmachine: <network>
	  <name>mk-kindnet-877855</name>
	  <uuid>0bd0bc54-3235-4ba3-a183-c5a9b6be600b</uuid>
	  <bridge name='virbr6' stp='on' delay='0'/>
	  <mac address='52:54:00:99:60:b3'/>
	  <dns enable='no'/>
	  <ip address='192.168.94.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.94.2' end='192.168.94.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1109 14:53:33.280567  589495 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855 ...
	I1109 14:53:33.280596  589495 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21139-549598/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso
	I1109 14:53:33.280608  589495 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 14:53:33.280706  589495 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21139-549598/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21139-549598/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso...
	I1109 14:53:33.597765  589495 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855/id_rsa...
	I1109 14:53:33.650876  589495 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855/kindnet-877855.rawdisk...
	I1109 14:53:33.650935  589495 main.go:143] libmachine: Writing magic tar header
	I1109 14:53:33.650975  589495 main.go:143] libmachine: Writing SSH key tar header
	I1109 14:53:33.651096  589495 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855 ...
	I1109 14:53:33.651209  589495 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855
	I1109 14:53:33.651252  589495 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855 (perms=drwx------)
	I1109 14:53:33.651271  589495 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598/.minikube/machines
	I1109 14:53:33.651292  589495 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598/.minikube/machines (perms=drwxr-xr-x)
	I1109 14:53:33.651309  589495 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 14:53:33.651325  589495 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598/.minikube (perms=drwxr-xr-x)
	I1109 14:53:33.651344  589495 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21139-549598
	I1109 14:53:33.651359  589495 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21139-549598 (perms=drwxrwxr-x)
	I1109 14:53:33.651377  589495 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1109 14:53:33.651393  589495 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1109 14:53:33.651409  589495 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1109 14:53:33.651430  589495 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1109 14:53:33.651449  589495 main.go:143] libmachine: checking permissions on dir: /home
	I1109 14:53:33.651494  589495 main.go:143] libmachine: skipping /home - not owner
	I1109 14:53:33.651510  589495 main.go:143] libmachine: defining domain...
	I1109 14:53:33.653585  589495 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>kindnet-877855</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855/kindnet-877855.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-kindnet-877855'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1109 14:53:33.775888  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:12:a0:39 in network default
	I1109 14:53:33.777154  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:33.777196  589495 main.go:143] libmachine: starting domain...
	I1109 14:53:33.777207  589495 main.go:143] libmachine: ensuring networks are active...
	I1109 14:53:33.778984  589495 main.go:143] libmachine: Ensuring network default is active
	I1109 14:53:33.780335  589495 main.go:143] libmachine: Ensuring network mk-kindnet-877855 is active
	I1109 14:53:33.781486  589495 main.go:143] libmachine: getting domain XML...
	I1109 14:53:33.783137  589495 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>kindnet-877855</name>
	  <uuid>8165d45a-d497-4c2c-8c31-72087adf09aa</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21139-549598/.minikube/machines/kindnet-877855/kindnet-877855.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:1c:25:d5'/>
	      <source network='mk-kindnet-877855'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:12:a0:39'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
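	Between "defining domain" and "waiting for domain to start", the driver hands both XML documents to libvirt and activates them. A condensed sketch of those calls, assuming the libvirt.org/go/libvirt bindings (the XML strings stand in for the dumps above; this is not minikube's exact code path):

	package main

	import (
		"log"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system") // matches KVMQemuURI in the config
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// Define and activate the private network first ("ensuring networks are active").
		netXML := "<network>...</network>" // placeholder for the network XML dumped above
		nw, err := conn.NetworkDefineXML(netXML)
		if err != nil {
			log.Fatal(err)
		}
		if err := nw.Create(); err != nil {
			log.Fatal(err)
		}

		// Then define the domain and start it ("starting domain...").
		domXML := "<domain type='kvm'>...</domain>" // placeholder for the domain XML dumped above
		dom, err := conn.DomainDefineXML(domXML)
		if err != nil {
			log.Fatal(err)
		}
		if err := dom.Create(); err != nil {
			log.Fatal(err)
		}
	}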
	I1109 14:53:32.514159  589326 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:53:32.514195  589326 machine.go:97] duration metric: took 6.997864748s to provisionDockerMachine
	I1109 14:53:32.514211  589326 start.go:293] postStartSetup for "pause-750355" (driver="kvm2")
	I1109 14:53:32.514241  589326 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:53:32.514343  589326 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:53:32.518330  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.519023  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:32.519069  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.519325  589326 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/pause-750355/id_rsa Username:docker}
	I1109 14:53:32.617883  589326 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:53:32.624741  589326 info.go:137] Remote host: Buildroot 2025.02
	I1109 14:53:32.624826  589326 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-549598/.minikube/addons for local assets ...
	I1109 14:53:32.624922  589326 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-549598/.minikube/files for local assets ...
	I1109 14:53:32.625068  589326 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem -> 5534732.pem in /etc/ssl/certs
	I1109 14:53:32.625275  589326 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:53:32.646938  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem --> /etc/ssl/certs/5534732.pem (1708 bytes)
	I1109 14:53:32.702728  589326 start.go:296] duration metric: took 188.497538ms for postStartSetup
	I1109 14:53:32.702787  589326 fix.go:56] duration metric: took 7.192104702s for fixHost
	I1109 14:53:32.707025  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.707632  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:32.707664  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.707938  589326 main.go:143] libmachine: Using SSH client type: native
	I1109 14:53:32.708236  589326 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I1109 14:53:32.708255  589326 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1109 14:53:32.848258  589326 main.go:143] libmachine: SSH cmd err, output: <nil>: 1762700012.843158872
	
	I1109 14:53:32.848293  589326 fix.go:216] guest clock: 1762700012.843158872
	I1109 14:53:32.848302  589326 fix.go:229] Guest: 2025-11-09 14:53:32.843158872 +0000 UTC Remote: 2025-11-09 14:53:32.702805276 +0000 UTC m=+10.819470767 (delta=140.353596ms)
	I1109 14:53:32.848332  589326 fix.go:200] guest clock delta is within tolerance: 140.353596ms
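	The guest-clock check above runs `date +%s.%N` on the VM, parses the output as fractional seconds, and compares it against the host's wall clock; only when the delta stays within tolerance does startup continue without resetting the guest clock. A minimal sketch of that comparison (the tolerance constant here is illustrative; the actual threshold is defined in minikube's fix.go):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	func main() {
		// Output of `date +%s.%N` on the guest, as captured in the log.
		guestOut := "1762700012.843158872"
		sec, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(sec*float64(time.Second)))
		delta := time.Since(guest)
		const tolerance = 2 * time.Second // illustrative value
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would reset clock\n", delta)
		}
	}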
	I1109 14:53:32.848341  589326 start.go:83] releasing machines lock for "pause-750355", held for 7.33770666s
	I1109 14:53:32.852953  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.853612  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:32.853652  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.854578  589326 ssh_runner.go:195] Run: cat /version.json
	I1109 14:53:32.854645  589326 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:53:32.858821  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.859048  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.859461  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:32.859491  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.859702  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:32.859764  589326 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/pause-750355/id_rsa Username:docker}
	I1109 14:53:32.859784  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:32.860173  589326 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/pause-750355/id_rsa Username:docker}
	I1109 14:53:32.982848  589326 ssh_runner.go:195] Run: systemctl --version
	I1109 14:53:32.992834  589326 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:53:33.167243  589326 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:53:33.184281  589326 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:53:33.184428  589326 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:53:33.199770  589326 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
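	The find/mv invocation above neutralizes any pre-existing bridge or podman CNI profiles by renaming them with a .mk_disabled suffix so CRI-O no longer loads them, while leaving loopback configs untouched. The same rename pass, sketched locally:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pattern)
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled on a previous run
				}
				fmt.Printf("disabling %s\n", m)
				_ = os.Rename(m, m+".mk_disabled")
			}
		}
	}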
	I1109 14:53:33.199835  589326 start.go:496] detecting cgroup driver to use...
	I1109 14:53:33.199924  589326 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:53:33.228861  589326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:53:33.253162  589326 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:53:33.253247  589326 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:53:33.277765  589326 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:53:33.306681  589326 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:53:33.547679  589326 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:53:33.789112  589326 docker.go:234] disabling docker service ...
	I1109 14:53:33.789192  589326 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:53:33.835061  589326 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:53:33.859423  589326 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:53:34.095668  589326 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:53:34.348950  589326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:53:34.370306  589326 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:53:34.406034  589326 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:53:34.406113  589326 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.424583  589326 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:53:34.424702  589326 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.444978  589326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.503318  589326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.536038  589326 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:53:34.557024  589326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.579042  589326 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:53:34.611977  589326 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
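	All of the CRI-O tweaks above are plain line rewrites of /etc/crio/crio.conf.d/02-crio.conf driven through sed over SSH. The pause-image edit, for instance, reduces to a single multiline-regexp substitution; a local sketch of the equivalent rewrite:

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		if err := os.WriteFile(conf, out, 0o644); err != nil {
			panic(err)
		}
	}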
	I1109 14:53:34.643050  589326 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:53:34.662589  589326 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:53:34.679267  589326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:53:35.016525  589326 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:53:35.469636  589326 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:53:35.469725  589326 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:53:35.484191  589326 start.go:564] Will wait 60s for crictl version
	I1109 14:53:35.484304  589326 ssh_runner.go:195] Run: which crictl
	I1109 14:53:35.498865  589326 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1109 14:53:35.624105  589326 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1109 14:53:35.624234  589326 ssh_runner.go:195] Run: crio --version
	I1109 14:53:35.723482  589326 ssh_runner.go:195] Run: crio --version
	I1109 14:53:35.811809  589326 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1109 14:53:35.818067  589326 main.go:143] libmachine: domain pause-750355 has defined MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:35.818967  589326 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:f7:f7", ip: ""} in network mk-pause-750355: {Iface:virbr3 ExpiryTime:2025-11-09 15:52:11 +0000 UTC Type:0 Mac:52:54:00:55:f7:f7 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:pause-750355 Clientid:01:52:54:00:55:f7:f7}
	I1109 14:53:35.819010  589326 main.go:143] libmachine: domain pause-750355 has defined IP address 192.168.61.177 and MAC address 52:54:00:55:f7:f7 in network mk-pause-750355
	I1109 14:53:35.819301  589326 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1109 14:53:35.831482  589326 kubeadm.go:884] updating cluster {Name:pause-750355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-750355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.177 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:53:35.831723  589326 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:53:35.831834  589326 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:53:35.984320  589326 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:53:35.984356  589326 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:53:35.984428  589326 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:53:36.087631  589326 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:53:36.087665  589326 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:53:36.087676  589326 kubeadm.go:935] updating node { 192.168.61.177 8443 v1.34.1 crio true true} ...
	I1109 14:53:36.087855  589326 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-750355 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-750355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:53:36.087968  589326 ssh_runner.go:195] Run: crio config
	I1109 14:53:36.214692  589326 cni.go:84] Creating CNI manager for ""
	I1109 14:53:36.214727  589326 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 14:53:36.214752  589326 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:53:36.214790  589326 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.177 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-750355 NodeName:pause-750355 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:53:36.215030  589326 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.177
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-750355"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.177"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.177"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:53:36.215131  589326 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:53:36.252652  589326 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:53:36.252755  589326 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:53:36.278942  589326 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1109 14:53:36.330921  589326 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:53:36.362721  589326 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1109 14:53:36.399186  589326 ssh_runner.go:195] Run: grep 192.168.61.177	control-plane.minikube.internal$ /etc/hosts
	I1109 14:53:36.407571  589326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:53:36.697284  589326 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:53:36.743739  589326 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355 for IP: 192.168.61.177
	I1109 14:53:36.743768  589326 certs.go:195] generating shared ca certs ...
	I1109 14:53:36.743788  589326 certs.go:227] acquiring lock for ca certs: {Name:mkc766226c1ec8ac0cc61519ae61374bb0aa3b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:53:36.744005  589326 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key
	I1109 14:53:36.744085  589326 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key
	I1109 14:53:36.744113  589326 certs.go:257] generating profile certs ...
	I1109 14:53:36.744239  589326 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/client.key
	I1109 14:53:36.744328  589326 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/apiserver.key.0e71cea4
	I1109 14:53:36.744407  589326 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/proxy-client.key
	I1109 14:53:36.744547  589326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473.pem (1338 bytes)
	W1109 14:53:36.744588  589326 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473_empty.pem, impossibly tiny 0 bytes
	I1109 14:53:36.744605  589326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 14:53:36.744638  589326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:53:36.744667  589326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:53:36.744701  589326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/certs/key.pem (1679 bytes)
	I1109 14:53:36.744757  589326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem (1708 bytes)
	I1109 14:53:36.745717  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:53:36.802898  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:53:36.930012  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:53:35.475093  589106 out.go:252]   - Generating certificates and keys ...
	I1109 14:53:35.475254  589106 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:53:35.475404  589106 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:53:35.641992  589106 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 14:53:35.707786  589106 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 14:53:36.093312  589106 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 14:53:36.313335  589106 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:53:36.686736  589106 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:53:36.686989  589106 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-877855 localhost] and IPs [192.168.50.12 127.0.0.1 ::1]
	I1109 14:53:37.162098  589106 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:53:37.162330  589106 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-877855 localhost] and IPs [192.168.50.12 127.0.0.1 ::1]
	I1109 14:53:37.241601  589106 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:53:37.611638  589106 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:53:37.798200  589106 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:53:37.798311  589106 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:53:37.899373  589106 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:53:38.225812  589106 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 14:53:38.927147  589106 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:53:39.343340  589106 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:53:39.895452  589106 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:53:39.895948  589106 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:53:39.898632  589106 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:53:35.922992  589495 main.go:143] libmachine: waiting for domain to start...
	I1109 14:53:35.924647  589495 main.go:143] libmachine: domain is now running
	I1109 14:53:35.924676  589495 main.go:143] libmachine: waiting for IP...
	I1109 14:53:35.925846  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:35.926853  589495 main.go:143] libmachine: no network interface addresses found for domain kindnet-877855 (source=lease)
	I1109 14:53:35.926880  589495 main.go:143] libmachine: trying to list again with source=arp
	I1109 14:53:35.927387  589495 main.go:143] libmachine: unable to find current IP address of domain kindnet-877855 in network mk-kindnet-877855 (interfaces detected: [])
	I1109 14:53:35.927453  589495 retry.go:31] will retry after 193.095337ms: waiting for domain to come up
	I1109 14:53:36.122207  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:36.123194  589495 main.go:143] libmachine: no network interface addresses found for domain kindnet-877855 (source=lease)
	I1109 14:53:36.123242  589495 main.go:143] libmachine: trying to list again with source=arp
	I1109 14:53:36.123875  589495 main.go:143] libmachine: unable to find current IP address of domain kindnet-877855 in network mk-kindnet-877855 (interfaces detected: [])
	I1109 14:53:36.123928  589495 retry.go:31] will retry after 352.484391ms: waiting for domain to come up
	I1109 14:53:36.478936  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:36.479857  589495 main.go:143] libmachine: no network interface addresses found for domain kindnet-877855 (source=lease)
	I1109 14:53:36.479901  589495 main.go:143] libmachine: trying to list again with source=arp
	I1109 14:53:36.480451  589495 main.go:143] libmachine: unable to find current IP address of domain kindnet-877855 in network mk-kindnet-877855 (interfaces detected: [])
	I1109 14:53:36.480504  589495 retry.go:31] will retry after 350.862438ms: waiting for domain to come up
	I1109 14:53:36.833464  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:36.834331  589495 main.go:143] libmachine: no network interface addresses found for domain kindnet-877855 (source=lease)
	I1109 14:53:36.834354  589495 main.go:143] libmachine: trying to list again with source=arp
	I1109 14:53:36.834850  589495 main.go:143] libmachine: unable to find current IP address of domain kindnet-877855 in network mk-kindnet-877855 (interfaces detected: [])
	I1109 14:53:36.834893  589495 retry.go:31] will retry after 572.965646ms: waiting for domain to come up
	I1109 14:53:37.410128  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:37.411039  589495 main.go:143] libmachine: no network interface addresses found for domain kindnet-877855 (source=lease)
	I1109 14:53:37.411070  589495 main.go:143] libmachine: trying to list again with source=arp
	I1109 14:53:37.411536  589495 main.go:143] libmachine: unable to find current IP address of domain kindnet-877855 in network mk-kindnet-877855 (interfaces detected: [])
	I1109 14:53:37.411584  589495 retry.go:31] will retry after 466.01613ms: waiting for domain to come up
	I1109 14:53:37.879279  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:37.880263  589495 main.go:143] libmachine: no network interface addresses found for domain kindnet-877855 (source=lease)
	I1109 14:53:37.880297  589495 main.go:143] libmachine: trying to list again with source=arp
	I1109 14:53:37.880868  589495 main.go:143] libmachine: unable to find current IP address of domain kindnet-877855 in network mk-kindnet-877855 (interfaces detected: [])
	I1109 14:53:37.880932  589495 retry.go:31] will retry after 595.157924ms: waiting for domain to come up
	I1109 14:53:38.478392  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:38.479624  589495 main.go:143] libmachine: no network interface addresses found for domain kindnet-877855 (source=lease)
	I1109 14:53:38.479658  589495 main.go:143] libmachine: trying to list again with source=arp
	I1109 14:53:38.480272  589495 main.go:143] libmachine: unable to find current IP address of domain kindnet-877855 in network mk-kindnet-877855 (interfaces detected: [])
	I1109 14:53:38.480322  589495 retry.go:31] will retry after 727.916196ms: waiting for domain to come up
	I1109 14:53:39.210883  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:39.211957  589495 main.go:143] libmachine: no network interface addresses found for domain kindnet-877855 (source=lease)
	I1109 14:53:39.211985  589495 main.go:143] libmachine: trying to list again with source=arp
	I1109 14:53:39.212422  589495 main.go:143] libmachine: unable to find current IP address of domain kindnet-877855 in network mk-kindnet-877855 (interfaces detected: [])
	I1109 14:53:39.212489  589495 retry.go:31] will retry after 937.951447ms: waiting for domain to come up
	I1109 14:53:40.151829  589495 main.go:143] libmachine: domain kindnet-877855 has defined MAC address 52:54:00:1c:25:d5 in network mk-kindnet-877855
	I1109 14:53:40.152878  589495 main.go:143] libmachine: no network interface addresses found for domain kindnet-877855 (source=lease)
	I1109 14:53:40.152905  589495 main.go:143] libmachine: trying to list again with source=arp
	I1109 14:53:40.153376  589495 main.go:143] libmachine: unable to find current IP address of domain kindnet-877855 in network mk-kindnet-877855 (interfaces detected: [])
	I1109 14:53:40.153427  589495 retry.go:31] will retry after 1.325402555s: waiting for domain to come up
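	Each retry.go:31 line above comes from the same pattern: poll for the domain's DHCP lease (falling back to ARP), and on failure sleep for a jittered, growing interval before the next attempt. A generic sketch of that loop:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup until it succeeds or attempts run out,
	// backing off with a jittered, growing delay between tries.
	func waitForIP(lookup func() (string, error), attempts int) (string, error) {
		delay := 200 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
			time.Sleep(wait)
			delay += delay / 2 // grow the base interval
		}
		return "", errors.New("domain never reported an IP")
	}

	func main() {
		_, _ = waitForIP(func() (string, error) {
			return "", errors.New("no lease yet") // stand-in for the lease/arp lookup
		}, 3)
	}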
	I1109 14:53:37.038562  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:53:37.132415  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1109 14:53:37.230643  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:53:37.344299  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:53:37.428591  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/pause-750355/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:53:37.508289  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/ssl/certs/5534732.pem --> /usr/share/ca-certificates/5534732.pem (1708 bytes)
	I1109 14:53:37.601954  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:53:37.699927  589326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-549598/.minikube/certs/553473.pem --> /usr/share/ca-certificates/553473.pem (1338 bytes)
	I1109 14:53:37.790527  589326 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:53:37.849189  589326 ssh_runner.go:195] Run: openssl version
	I1109 14:53:37.863966  589326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:53:37.899063  589326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:53:37.909975  589326 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:53:37.910059  589326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:53:37.925755  589326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:53:37.951822  589326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/553473.pem && ln -fs /usr/share/ca-certificates/553473.pem /etc/ssl/certs/553473.pem"
	I1109 14:53:37.986396  589326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/553473.pem
	I1109 14:53:38.002761  589326 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:42 /usr/share/ca-certificates/553473.pem
	I1109 14:53:38.002885  589326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/553473.pem
	I1109 14:53:38.018873  589326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/553473.pem /etc/ssl/certs/51391683.0"
	I1109 14:53:38.053748  589326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5534732.pem && ln -fs /usr/share/ca-certificates/5534732.pem /etc/ssl/certs/5534732.pem"
	I1109 14:53:38.080199  589326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5534732.pem
	I1109 14:53:38.096454  589326 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:42 /usr/share/ca-certificates/5534732.pem
	I1109 14:53:38.096542  589326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5534732.pem
	I1109 14:53:38.113575  589326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5534732.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:53:38.139320  589326 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:53:38.148439  589326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:53:38.162546  589326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:53:38.180182  589326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:53:38.194655  589326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:53:38.209689  589326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:53:38.225878  589326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:53:38.242224  589326 kubeadm.go:401] StartCluster: {Name:pause-750355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:pause-750355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.177 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:53:38.242380  589326 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:53:38.242485  589326 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:53:38.336888  589326 cri.go:89] found id: "45947c6dce347ae279e4132072fb89c1b0522e1b326178274424222c2588321f"
	I1109 14:53:38.336936  589326 cri.go:89] found id: "75289d4097d96308d69b2aaecdc6ec6132f28173770c61a0750ebca515cf1c7e"
	I1109 14:53:38.336944  589326 cri.go:89] found id: "14fc4c3df613902789c27b68b1a5733c47ba7f7489099ab5c477b0483663c4aa"
	I1109 14:53:38.336950  589326 cri.go:89] found id: "59e2c2e4d7754e6a89e73e34e1fff37173c66fb5e41dec40edd897afc30be428"
	I1109 14:53:38.336953  589326 cri.go:89] found id: "21775c560e54e26138ce07b6c06d3e22037a109f355e39c9929ed04ace19914a"
	I1109 14:53:38.336959  589326 cri.go:89] found id: "604c10edad5bca905fe997db6a580b99ebde28984c8a549a484170114ee3ddba"
	I1109 14:53:38.336965  589326 cri.go:89] found id: "78781e9a162ec886cd6c744eaa944c53245caabf93ca6dceadcffcbe3c2ebd45"
	I1109 14:53:38.336970  589326 cri.go:89] found id: "76fa8220ee04417468321c6b207132c59ea2361bc897036d67d13cd60c74934d"
	I1109 14:53:38.336976  589326 cri.go:89] found id: "58fe997c2cbccb9c9742a4120bf476b3f9d5a51772918e6de87c43fbeb1fb8fa"
	I1109 14:53:38.336988  589326 cri.go:89] found id: ""
	I1109 14:53:38.337060  589326 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-750355 -n pause-750355
helpers_test.go:269: (dbg) Run:  kubectl --context pause-750355 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (56.49s)
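Aside: the certificate phase captured in the log above is doing two standard OpenSSL dances. It computes each CA's subject hash (`openssl x509 -hash -noout`) and links the PEM to /etc/ssl/certs/<hash>.0 so system TLS lookups can find it, then probes every cluster certificate with `-checkend 86400` to decide whether anything expires within 24 hours and must be regenerated. A minimal local sketch of both steps, assuming openssl on PATH (helper names here are illustrative; minikube runs the equivalent commands over SSH from certs.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // subjectHash returns the OpenSSL subject hash used to name
    // symlinks under /etc/ssl/certs (e.g. b5213941.0 in the log above).
    func subjectHash(pem string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    // expiresWithin reports whether the cert expires in the next n seconds,
    // mirroring the `openssl x509 -checkend` probes in the log above.
    // -checkend exits non-zero when the cert expires within n seconds
    // (or when the command itself fails).
    func expiresWithin(pem string, n int) bool {
    	err := exec.Command("openssl", "x509", "-noout", "-in", pem,
    		"-checkend", fmt.Sprint(n)).Run()
    	return err != nil
    }

    func main() {
    	hash, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
    	if err == nil {
    		fmt.Printf("link target: /etc/ssl/certs/%s.0\n", hash)
    	}
    	if expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 86400) {
    		fmt.Println("etcd server cert expires within 24h; regenerate")
    	}
    }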

                                                
                                    
TestISOImage/PersistentMounts//data (0s)
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "df -t ext4 /data | grep /data"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-746433 ssh "df -t ext4 /data | grep /data": context deadline exceeded (1.922µs)
iso_test.go:99: failed to verify existence of "/data" mount. args "out/minikube-linux-amd64 -p guest-746433 ssh \"df -t ext4 /data | grep /data\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//data (0.00s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0s)
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-746433 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker": context deadline exceeded (183ns)
iso_test.go:99: failed to verify existence of "/var/lib/docker" mount. args "out/minikube-linux-amd64 -p guest-746433 ssh \"df -t ext4 /var/lib/docker | grep /var/lib/docker\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/docker (0.00s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0s)
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-746433 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni": context deadline exceeded (249ns)
iso_test.go:99: failed to verify existence of "/var/lib/cni" mount. args "out/minikube-linux-amd64 -p guest-746433 ssh \"df -t ext4 /var/lib/cni | grep /var/lib/cni\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/cni (0.00s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0s)
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-746433 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet": context deadline exceeded (202ns)
iso_test.go:99: failed to verify existence of "/var/lib/kubelet" mount. args "out/minikube-linux-amd64 -p guest-746433 ssh \"df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/kubelet (0.00s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0s)
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-746433 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube": context deadline exceeded (502ns)
iso_test.go:99: failed to verify existence of "/var/lib/minikube" mount. args "out/minikube-linux-amd64 -p guest-746433 ssh \"df -t ext4 /var/lib/minikube | grep /var/lib/minikube\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/minikube (0.00s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0s)
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-746433 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox": context deadline exceeded (200ns)
iso_test.go:99: failed to verify existence of "/var/lib/toolbox" mount. args "out/minikube-linux-amd64 -p guest-746433 ssh \"df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/toolbox (0.00s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0s)
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-746433 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker": context deadline exceeded (407ns)
iso_test.go:99: failed to verify existence of "/var/lib/boot2docker" mount. args "out/minikube-linux-amd64 -p guest-746433 ssh \"df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/boot2docker (0.00s)
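All seven PersistentMounts failures share the same shape: the probe is `df -t ext4 <dir> | grep <dir>`, which only matches when the directory is backed by the ISO's persistent ext4 data partition, and every invocation died with `context deadline exceeded` in well under a microsecond, meaning the shared test context had already expired before `minikube ssh` even ran; this is an inherited timeout, not evidence about the mounts themselves. A rough standalone version of the predicate, run inside the guest (paths as in the subtests above):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // onExt4 mirrors the PersistentMounts check: `df -t ext4 <dir>`
    // prints a table only when the backing filesystem is ext4, and the
    // grep in the test just asserts the directory appears in that table.
    func onExt4(dir string) bool {
    	out, err := exec.Command("df", "-t", "ext4", dir).Output()
    	return err == nil && strings.Contains(string(out), dir)
    }

    func main() {
    	for _, d := range []string{"/data", "/var/lib/docker", "/var/lib/minikube"} {
    		fmt.Printf("%-20s ext4=%v\n", d, onExt4(d))
    	}
    }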

                                                
                                    
TestISOImage/VersionJSON (0s)
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "cat /version.json"
iso_test.go:106: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-746433 ssh "cat /version.json": context deadline exceeded (795ns)
iso_test.go:108: failed to read /version.json. args "out/minikube-linux-amd64 -p guest-746433 ssh \"cat /version.json\"": context deadline exceeded
--- FAIL: TestISOImage/VersionJSON (0.00s)

                                                
                                    
TestISOImage/eBPFSupport (0s)
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
iso_test.go:125: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-746433 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'": context deadline exceeded (281ns)
iso_test.go:127: failed to verify existence of "/sys/kernel/btf/vmlinux" file: args "out/minikube-linux-amd64 -p guest-746433 ssh \"test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'\"": context deadline exceeded
iso_test.go:131: expected file "/sys/kernel/btf/vmlinux" to exist, but it does not. BTF types are required for CO-RE eBPF programs; set CONFIG_DEBUG_INFO_BTF in kernel configuration.
--- FAIL: TestISOImage/eBPFSupport (0.00s)
E1109 15:02:34.228075  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/enable-default-cni-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
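The eBPF probe is a one-liner: /sys/kernel/btf/vmlinux exists only when the kernel was built with CONFIG_DEBUG_INFO_BTF=y, and CO-RE eBPF loaders read the vmlinux type information from it to relocate struct field offsets at load time. The same check in Go (a sketch; the test performs it via `minikube ssh test -f ...`):

    package main

    import (
    	"fmt"
    	"os"
    )

    // hasBTF mirrors the eBPFSupport probe: the BTF blob is exposed at
    // /sys/kernel/btf/vmlinux only on CONFIG_DEBUG_INFO_BTF=y kernels.
    func hasBTF() bool {
    	info, err := os.Stat("/sys/kernel/btf/vmlinux")
    	return err == nil && !info.IsDir()
    }

    func main() {
    	if hasBTF() {
    		fmt.Println("OK")
    	} else {
    		fmt.Println("NOT FOUND")
    	}
    }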

                                                
                                    

Test pass (282/345)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 6.56
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.19
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.17
12 TestDownloadOnly/v1.34.1/json-events 3.52
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.19
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.18
21 TestBinaryMirror 0.71
22 TestOffline 115.28
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 145.12
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 9.71
35 TestAddons/parallel/Registry 46.67
36 TestAddons/parallel/RegistryCreds 0.85
38 TestAddons/parallel/InspektorGadget 11.98
39 TestAddons/parallel/MetricsServer 6.09
42 TestAddons/parallel/Headlamp 23.57
43 TestAddons/parallel/CloudSpanner 6.72
45 TestAddons/parallel/NvidiaDevicePlugin 7.16
46 TestAddons/parallel/Yakd 12.09
48 TestAddons/StoppedEnableDisable 86.1
49 TestCertOptions 79.02
50 TestCertExpiration 336.59
52 TestForceSystemdFlag 98.59
53 TestForceSystemdEnv 49.1
58 TestErrorSpam/setup 43.2
59 TestErrorSpam/start 0.38
60 TestErrorSpam/status 0.8
61 TestErrorSpam/pause 1.86
62 TestErrorSpam/unpause 2.48
63 TestErrorSpam/stop 5.52
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 89.05
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 36.26
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.81
75 TestFunctional/serial/CacheCmd/cache/add_local 1.29
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.89
80 TestFunctional/serial/CacheCmd/cache/delete 0.14
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ComponentHealth 0.08
85 TestFunctional/serial/LogsCmd 1.82
86 TestFunctional/serial/LogsFileCmd 1.83
87 TestFunctional/serial/InvalidService 4.16
89 TestFunctional/parallel/ConfigCmd 0.54
91 TestFunctional/parallel/DryRun 0.26
92 TestFunctional/parallel/InternationalLanguage 0.13
93 TestFunctional/parallel/StatusCmd 0.75
98 TestFunctional/parallel/AddonsCmd 0.18
101 TestFunctional/parallel/SSHCmd 0.41
102 TestFunctional/parallel/CpCmd 1.5
104 TestFunctional/parallel/FileSync 0.18
105 TestFunctional/parallel/CertSync 1.43
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.49
113 TestFunctional/parallel/License 0.31
114 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
115 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
116 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
117 TestFunctional/parallel/Version/short 0.07
118 TestFunctional/parallel/Version/components 0.5
119 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
120 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
121 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
122 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
123 TestFunctional/parallel/ImageCommands/ImageBuild 3.48
124 TestFunctional/parallel/ImageCommands/Setup 0.49
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.94
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.93
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.08
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.55
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.88
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.6
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
143 TestFunctional/parallel/ProfileCmd/profile_list 0.34
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
145 TestFunctional/parallel/MountCmd/any-port 36.05
146 TestFunctional/parallel/MountCmd/specific-port 1.31
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.39
148 TestFunctional/parallel/ServiceCmd/List 1.22
149 TestFunctional/parallel/ServiceCmd/JSONOutput 1.22
153 TestFunctional/delete_echo-server_images 0.05
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 259.92
161 TestMultiControlPlane/serial/DeployApp 7.45
162 TestMultiControlPlane/serial/PingHostFromPods 1.69
163 TestMultiControlPlane/serial/AddWorkerNode 50.62
164 TestMultiControlPlane/serial/NodeLabels 0.09
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.82
166 TestMultiControlPlane/serial/CopyFile 12.6
167 TestMultiControlPlane/serial/StopSecondaryNode 81.15
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.59
169 TestMultiControlPlane/serial/RestartSecondaryNode 45.44
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.04
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 398.01
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.79
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.6
174 TestMultiControlPlane/serial/StopCluster 230.56
175 TestMultiControlPlane/serial/RestartCluster 115.61
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.58
177 TestMultiControlPlane/serial/AddSecondaryNode 84.03
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.79
183 TestJSONOutput/start/Command 89.05
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.87
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.78
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.28
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.29
211 TestMainNoArgs 0.07
212 TestMinikubeProfile 96.17
215 TestMountStart/serial/StartWithMountFirst 25.07
216 TestMountStart/serial/VerifyMountFirst 0.35
217 TestMountStart/serial/StartWithMountSecond 24.55
218 TestMountStart/serial/VerifyMountSecond 0.34
219 TestMountStart/serial/DeleteFirst 0.79
220 TestMountStart/serial/VerifyMountPostDelete 0.38
221 TestMountStart/serial/Stop 1.51
222 TestMountStart/serial/RestartStopped 22.89
223 TestMountStart/serial/VerifyMountPostStop 0.35
226 TestMultiNode/serial/FreshStart2Nodes 111.69
227 TestMultiNode/serial/DeployApp2Nodes 5.94
228 TestMultiNode/serial/PingHostFrom2Pods 1.1
229 TestMultiNode/serial/AddNode 46.25
230 TestMultiNode/serial/MultiNodeLabels 0.08
231 TestMultiNode/serial/ProfileList 0.52
232 TestMultiNode/serial/CopyFile 7.02
233 TestMultiNode/serial/StopNode 2.69
234 TestMultiNode/serial/StartAfterStop 45.51
235 TestMultiNode/serial/RestartKeepsNodes 318.68
236 TestMultiNode/serial/DeleteNode 2.93
237 TestMultiNode/serial/StopMultiNode 152.76
238 TestMultiNode/serial/RestartMultiNode 94.43
239 TestMultiNode/serial/ValidateNameConflict 45.28
246 TestScheduledStopUnix 116.31
250 TestRunningBinaryUpgrade 175.19
252 TestKubernetesUpgrade 213.76
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.13
259 TestNoKubernetes/serial/StartWithK8s 95.66
264 TestNetworkPlugins/group/false 4.47
268 TestISOImage/Setup 84.54
269 TestNoKubernetes/serial/StartWithStopK8s 35.26
271 TestISOImage/Binaries/crictl 0.21
272 TestISOImage/Binaries/curl 0.19
273 TestISOImage/Binaries/docker 0.21
274 TestISOImage/Binaries/git 0.2
275 TestISOImage/Binaries/iptables 0.21
276 TestISOImage/Binaries/podman 0.2
277 TestISOImage/Binaries/rsync 0.21
278 TestISOImage/Binaries/socat 0.21
279 TestISOImage/Binaries/wget 0.2
280 TestISOImage/Binaries/VBoxControl 0.21
281 TestISOImage/Binaries/VBoxService 0.22
282 TestStoppedBinaryUpgrade/Setup 0.47
283 TestStoppedBinaryUpgrade/Upgrade 146.88
284 TestNoKubernetes/serial/Start 66.52
285 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
286 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
287 TestNoKubernetes/serial/ProfileList 7.01
288 TestNoKubernetes/serial/Stop 1.63
289 TestNoKubernetes/serial/StartNoArgs 57.96
297 TestStoppedBinaryUpgrade/MinikubeLogs 1.34
298 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
300 TestPause/serial/Start 116.86
301 TestNetworkPlugins/group/auto/Start 96.81
303 TestNetworkPlugins/group/kindnet/Start 71.58
304 TestNetworkPlugins/group/calico/Start 81.75
305 TestNetworkPlugins/group/auto/KubeletFlags 0.2
306 TestNetworkPlugins/group/auto/NetCatPod 10.3
307 TestNetworkPlugins/group/kindnet/ControllerPod 6.02
308 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
309 TestNetworkPlugins/group/kindnet/NetCatPod 11.54
310 TestNetworkPlugins/group/auto/DNS 0.24
311 TestNetworkPlugins/group/auto/Localhost 0.19
312 TestNetworkPlugins/group/auto/HairPin 0.23
313 TestNetworkPlugins/group/kindnet/DNS 0.21
314 TestNetworkPlugins/group/kindnet/Localhost 0.19
315 TestNetworkPlugins/group/kindnet/HairPin 0.2
316 TestNetworkPlugins/group/custom-flannel/Start 78.62
317 TestNetworkPlugins/group/enable-default-cni/Start 115.12
318 TestNetworkPlugins/group/calico/ControllerPod 6.01
319 TestNetworkPlugins/group/calico/KubeletFlags 0.27
320 TestNetworkPlugins/group/calico/NetCatPod 23.45
321 TestNetworkPlugins/group/calico/DNS 0.27
322 TestNetworkPlugins/group/calico/Localhost 0.21
323 TestNetworkPlugins/group/calico/HairPin 0.29
324 TestNetworkPlugins/group/flannel/Start 84.32
325 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
326 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.48
327 TestNetworkPlugins/group/bridge/Start 86.88
328 TestNetworkPlugins/group/custom-flannel/DNS 0.2
329 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
330 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
332 TestStartStop/group/old-k8s-version/serial/FirstStart 116.43
333 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
334 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.37
335 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
336 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
337 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
338 TestNetworkPlugins/group/flannel/ControllerPod 6.01
340 TestStartStop/group/no-preload/serial/FirstStart 116.22
341 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
342 TestNetworkPlugins/group/flannel/NetCatPod 13.29
343 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
344 TestNetworkPlugins/group/bridge/NetCatPod 12.38
345 TestNetworkPlugins/group/flannel/DNS 0.2
346 TestNetworkPlugins/group/flannel/Localhost 0.15
347 TestNetworkPlugins/group/flannel/HairPin 0.14
348 TestNetworkPlugins/group/bridge/DNS 0.21
349 TestNetworkPlugins/group/bridge/Localhost 0.19
350 TestNetworkPlugins/group/bridge/HairPin 0.2
352 TestStartStop/group/embed-certs/serial/FirstStart 66.36
354 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 118.03
355 TestStartStop/group/old-k8s-version/serial/DeployApp 11.47
356 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.49
357 TestStartStop/group/old-k8s-version/serial/Stop 86.87
358 TestStartStop/group/embed-certs/serial/DeployApp 11.4
359 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.46
360 TestStartStop/group/no-preload/serial/DeployApp 9.38
361 TestStartStop/group/embed-certs/serial/Stop 86.62
362 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
363 TestStartStop/group/no-preload/serial/Stop 89.28
364 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.43
365 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
366 TestStartStop/group/old-k8s-version/serial/SecondStart 50.55
367 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.26
368 TestStartStop/group/default-k8s-diff-port/serial/Stop 90.77
369 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
370 TestStartStop/group/embed-certs/serial/SecondStart 53.72
371 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
372 TestStartStop/group/no-preload/serial/SecondStart 76.07
373 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 15.01
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
375 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
376 TestStartStop/group/old-k8s-version/serial/Pause 3.72
378 TestStartStop/group/newest-cni/serial/FirstStart 59.41
379 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.01
380 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
381 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 64.12
382 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
383 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.42
384 TestStartStop/group/embed-certs/serial/Pause 4.48
395 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.01
396 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
397 TestStartStop/group/newest-cni/serial/DeployApp 0
398 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.58
399 TestStartStop/group/newest-cni/serial/Stop 11.13
400 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
401 TestStartStop/group/no-preload/serial/Pause 3.14
402 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
403 TestStartStop/group/newest-cni/serial/SecondStart 40.37
404 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.01
405 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
406 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
407 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.38
408 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
409 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
410 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
411 TestStartStop/group/newest-cni/serial/Pause 4.12
TestDownloadOnly/v1.28.0/json-events (6.56s)
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-969818 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-969818 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.560291773s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.56s)
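The json-events subtest consumes `start -o=json`, which prints one self-contained JSON event per line instead of human-readable output. A hedged sketch of draining that stream from a subprocess (the `type` field follows minikube's cloud-events-style output, but treat the schema here as illustrative rather than guaranteed):

    package main

    import (
    	"bufio"
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
    		"--download-only", "-p", "download-only-969818",
    		"--driver=kvm2", "--container-runtime=crio")
    	stdout, err := cmd.StdoutPipe()
    	if err != nil {
    		panic(err)
    	}
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}
    	sc := bufio.NewScanner(stdout)
    	for sc.Scan() {
    		var ev map[string]any // each line is a standalone JSON event
    		if json.Unmarshal(sc.Bytes(), &ev) == nil {
    			fmt.Println("event type:", ev["type"])
    		}
    	}
    	_ = cmd.Wait()
    }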

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1109 13:28:55.883923  553473 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1109 13:28:55.884055  553473 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-969818
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-969818: exit status 85 (84.986506ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-969818 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-969818 │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:28:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:28:49.383165  553485 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:28:49.383508  553485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:28:49.383521  553485 out.go:374] Setting ErrFile to fd 2...
	I1109 13:28:49.383526  553485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:28:49.383761  553485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	W1109 13:28:49.383964  553485 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21139-549598/.minikube/config/config.json: open /home/jenkins/minikube-integration/21139-549598/.minikube/config/config.json: no such file or directory
	I1109 13:28:49.384532  553485 out.go:368] Setting JSON to true
	I1109 13:28:49.385594  553485 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":69078,"bootTime":1762625851,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 13:28:49.385734  553485 start.go:143] virtualization: kvm guest
	I1109 13:28:49.388427  553485 out.go:99] [download-only-969818] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 13:28:49.388686  553485 notify.go:221] Checking for updates...
	W1109 13:28:49.388696  553485 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball: no such file or directory
	I1109 13:28:49.390114  553485 out.go:171] MINIKUBE_LOCATION=21139
	I1109 13:28:49.391829  553485 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:28:49.393530  553485 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 13:28:49.395138  553485 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 13:28:49.396655  553485 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1109 13:28:49.399178  553485 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1109 13:28:49.399528  553485 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:28:49.442527  553485 out.go:99] Using the kvm2 driver based on user configuration
	I1109 13:28:49.442599  553485 start.go:309] selected driver: kvm2
	I1109 13:28:49.442612  553485 start.go:930] validating driver "kvm2" against <nil>
	I1109 13:28:49.443255  553485 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 13:28:49.443956  553485 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1109 13:28:49.444158  553485 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1109 13:28:49.444201  553485 cni.go:84] Creating CNI manager for ""
	I1109 13:28:49.444273  553485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1109 13:28:49.444290  553485 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1109 13:28:49.444357  553485 start.go:353] cluster config:
	{Name:download-only-969818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-969818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:28:49.444620  553485 iso.go:125] acquiring lock: {Name:mk8f0547b4600c0b2d1e831f024251df19b55199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:28:49.446790  553485 out.go:99] Downloading VM boot image ...
	I1109 13:28:49.446893  553485 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21139-549598/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso
	I1109 13:28:52.305769  553485 out.go:99] Starting "download-only-969818" primary control-plane node in "download-only-969818" cluster
	I1109 13:28:52.305850  553485 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1109 13:28:52.326587  553485 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1109 13:28:52.326633  553485 cache.go:65] Caching tarball of preloaded images
	I1109 13:28:52.326927  553485 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1109 13:28:52.328681  553485 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1109 13:28:52.328715  553485 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1109 13:28:52.355027  553485 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1109 13:28:52.355163  553485 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-969818 host does not exist
	  To start a cluster, run: "minikube start -p download-only-969818"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
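Worth noting in the log above: both downloads carry a `?checksum=` query parameter (the ISO is checked against a remote .sha256 file, the preload against an MD5 fetched from the GCS API), so the downloader verifies each artifact before caching it. A minimal sketch of the preload-style MD5 verification (path and digest copied from the log above; the real flow lives in minikube's download code):

    package main

    import (
    	"crypto/md5"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"os"
    )

    // verifyMD5 recomputes a file's MD5 and compares it with the expected
    // digest, as the `?checksum=md5:72bc...` parameter above implies.
    func verifyMD5(path, want string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	h := md5.New()
    	if _, err := io.Copy(h, f); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != want {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
    	}
    	return nil
    }

    func main() {
    	err := verifyMD5("/home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4",
    		"72bc7f8573f574c02d8c9a9b3496176b")
    	fmt.Println("verify:", err)
    }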

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.19s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.17s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-969818
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.17s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (3.52s)
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-045678 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-045678 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.516956538s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.52s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1109 13:28:59.856397  553473 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1109 13:28:59.856446  553473 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-549598/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-045678
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-045678: exit status 85 (88.282146ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-969818 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-969818 │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │ 09 Nov 25 13:28 UTC │
	│ delete  │ -p download-only-969818                                                                                                                                                 │ download-only-969818 │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │ 09 Nov 25 13:28 UTC │
	│ start   │ -o=json --download-only -p download-only-045678 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-045678 │ jenkins │ v1.37.0 │ 09 Nov 25 13:28 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:28:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:28:56.404563  553677 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:28:56.404731  553677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:28:56.404744  553677 out.go:374] Setting ErrFile to fd 2...
	I1109 13:28:56.404752  553677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:28:56.404986  553677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 13:28:56.405545  553677 out.go:368] Setting JSON to true
	I1109 13:28:56.406512  553677 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":69085,"bootTime":1762625851,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 13:28:56.406637  553677 start.go:143] virtualization: kvm guest
	I1109 13:28:56.408706  553677 out.go:99] [download-only-045678] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 13:28:56.408972  553677 notify.go:221] Checking for updates...
	I1109 13:28:56.410289  553677 out.go:171] MINIKUBE_LOCATION=21139
	I1109 13:28:56.411783  553677 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:28:56.413163  553677 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 13:28:56.417572  553677 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 13:28:56.419066  553677 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-045678 host does not exist
	  To start a cluster, run: "minikube start -p download-only-045678"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.19s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.18s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-045678
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.18s)

                                                
                                    
TestBinaryMirror (0.71s)
=== RUN   TestBinaryMirror
I1109 13:29:00.665846  553473 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-045777 --alsologtostderr --binary-mirror http://127.0.0.1:41935 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-045777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-045777
--- PASS: TestBinaryMirror (0.71s)

                                                
                                    
TestOffline (115.28s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-668437 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-668437 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m54.359379338s)
helpers_test.go:175: Cleaning up "offline-crio-668437" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-668437
--- PASS: TestOffline (115.28s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-640912
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-640912: exit status 85 (88.086092ms)

                                                
                                                
-- stdout --
	* Profile "addons-640912" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-640912"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-640912
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-640912: exit status 85 (88.496314ms)

                                                
                                                
-- stdout --
	* Profile "addons-640912" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-640912"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/Setup (145.12s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-640912 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-640912 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m25.121780643s)
--- PASS: TestAddons/Setup (145.12s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-640912 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-640912 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.71s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-640912 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-640912 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [495cc12a-d51f-43be-a567-96a5b4fad03a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [495cc12a-d51f-43be-a567-96a5b4fad03a] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.007858164s
addons_test.go:694: (dbg) Run:  kubectl --context addons-640912 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-640912 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-640912 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.71s)

TestAddons/parallel/Registry (46.67s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 10.831105ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-wm87r" [59b2fc4f-c09e-47b3-ac30-a3dce9d1d9b1] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009915854s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-d6q28" [bc34c737-250f-4084-859d-d21c43a619d8] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011986461s
addons_test.go:392: (dbg) Run:  kubectl --context addons-640912 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-640912 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-640912 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (35.637154349s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-640912 ip
2025/11/09 13:32:31 [DEBUG] GET http://192.168.39.228:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-640912 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (46.67s)
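
The registry check above can be replayed by hand. A minimal sketch (illustrative only; `minikube` stands in for the out/minikube-linux-amd64 binary under test, and the /v2/_catalog probe assumes the standard Docker Registry v2 API rather than anything this log shows):

  # reach the registry through the in-cluster Service, as the test does
  kubectl --context addons-640912 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

  # registry-proxy also exposes it on the node IP at :5000 (the DEBUG GET above)
  curl -s "http://$(minikube -p addons-640912 ip):5000/v2/_catalog"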

TestAddons/parallel/RegistryCreds (0.85s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 6.544176ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-640912
addons_test.go:332: (dbg) Run:  kubectl --context addons-640912 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-640912 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.85s)

TestAddons/parallel/InspektorGadget (11.98s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-r9ztz" [6550b656-ea87-49f5-b19e-c569904851d9] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00444078s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-640912 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-640912 addons disable inspektor-gadget --alsologtostderr -v=1: (5.977259791s)
--- PASS: TestAddons/parallel/InspektorGadget (11.98s)

TestAddons/parallel/MetricsServer (6.09s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.537039ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-lfl7n" [d4722f0c-b942-4bf8-88e4-a8d5b09f6fdf] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006039247s
addons_test.go:463: (dbg) Run:  kubectl --context addons-640912 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-640912 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-640912 addons disable metrics-server --alsologtostderr -v=1: (1.003756478s)
--- PASS: TestAddons/parallel/MetricsServer (6.09s)

TestAddons/parallel/Headlamp (23.57s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-640912 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-640912 --alsologtostderr -v=1: (1.576555501s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-z7pww" [6a55b049-11fb-4b20-a5b2-243167aee236] Pending
helpers_test.go:352: "headlamp-6945c6f4d-z7pww" [6a55b049-11fb-4b20-a5b2-243167aee236] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-z7pww" [6a55b049-11fb-4b20-a5b2-243167aee236] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.005139268s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-640912 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-640912 addons disable headlamp --alsologtostderr -v=1: (5.986197476s)
--- PASS: TestAddons/parallel/Headlamp (23.57s)

TestAddons/parallel/CloudSpanner (6.72s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-v5gk4" [f3d7af97-5bf1-4694-a516-0560f8fc7a38] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.006594253s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-640912 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.72s)

TestAddons/parallel/NvidiaDevicePlugin (7.16s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-vhlxq" [b849210d-a4dd-4bfc-ad75-3bf99c214e37] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.008014074s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-640912 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-640912 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.146215508s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.16s)

TestAddons/parallel/Yakd (12.09s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-p97dd" [f1100c2b-b875-4188-bdc5-965313dbc01b] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00364131s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-640912 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-640912 addons disable yakd --alsologtostderr -v=1: (6.082745355s)
--- PASS: TestAddons/parallel/Yakd (12.09s)

TestAddons/StoppedEnableDisable (86.1s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-640912
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-640912: (1m25.864485106s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-640912
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-640912
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-640912
--- PASS: TestAddons/StoppedEnableDisable (86.10s)

TestCertOptions (79.02s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-868897 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1109 14:52:50.458099  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-868897 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m17.428627274s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-868897 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-868897 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-868897 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-868897" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-868897
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-868897: (1.028048109s)
--- PASS: TestCertOptions (79.02s)
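
To see what this test asserts, the generated apiserver certificate can be inspected directly. A sketch (the grep filter and jsonpath query are illustrative additions, not part of the test itself):

  # SANs should include 127.0.0.1, 192.168.15.15, localhost and www.google.com
  minikube -p cert-options-868897 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
    | grep -A1 "Subject Alternative Name"

  # the kubeconfig should point at the non-default apiserver port 8555
  kubectl --context cert-options-868897 config view \
    -o jsonpath='{.clusters[?(@.name=="cert-options-868897")].cluster.server}'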

TestCertExpiration (336.59s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-729640 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-729640 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m25.087630254s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-729640 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-729640 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m10.453204018s)
helpers_test.go:175: Cleaning up "cert-expiration-729640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-729640
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-729640: (1.050125664s)
--- PASS: TestCertExpiration (336.59s)
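
The two starts above first issue certificates with a 3m lifetime, then restart with --cert-expiration=8760h so that minikube replaces the expired certs. A hedged way to observe the rotation (cert path taken from the cert tests in this log; sudo requirements may vary):

  minikube -p cert-expiration-729640 ssh \
    "sudo openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"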

TestForceSystemdFlag (98.59s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-936534 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1109 14:51:27.379170  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-936534 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m37.418777295s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-936534 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-936534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-936534
--- PASS: TestForceSystemdFlag (98.59s)
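
The file the test cats is CRI-O's minikube drop-in; with --force-systemd the expectation is a systemd cgroup manager. A sketch of the check (the expected value is an assumption based on the flag's purpose, not quoted from this log):

  minikube -p force-systemd-flag-936534 ssh \
    "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
  # expected: cgroup_manager = "systemd"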

TestForceSystemdEnv (49.1s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-849257 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-849257 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (47.953500021s)
helpers_test.go:175: Cleaning up "force-systemd-env-849257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-849257
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-849257: (1.145555318s)
--- PASS: TestForceSystemdEnv (49.10s)
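
The env variant drives the same behavior through MINIKUBE_FORCE_SYSTEMD instead of a flag. A sketch (setting the variable to "true" is an assumption about how the harness invokes it; the variable name itself appears in minikube's environment dump later in this log):

  MINIKUBE_FORCE_SYSTEMD=true minikube start -p force-systemd-env-849257 \
    --memory=3072 --driver=kvm2 --container-runtime=crio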

TestErrorSpam/setup (43.2s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-909540 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-909540 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-909540 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-909540 --driver=kvm2  --container-runtime=crio: (43.196030696s)
--- PASS: TestErrorSpam/setup (43.20s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-909540 --log_dir /tmp/nospam-909540 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-909540 --log_dir /tmp/nospam-909540 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-909540 --log_dir /tmp/nospam-909540 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.8s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-909540 --log_dir /tmp/nospam-909540 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-909540 --log_dir /tmp/nospam-909540 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-909540 --log_dir /tmp/nospam-909540 status
--- PASS: TestErrorSpam/status (0.80s)

TestErrorSpam/pause (1.86s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-909540 --log_dir /tmp/nospam-909540 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-909540 --log_dir /tmp/nospam-909540 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-909540 --log_dir /tmp/nospam-909540 pause
--- PASS: TestErrorSpam/pause (1.86s)

TestErrorSpam/unpause (2.48s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-909540 --log_dir /tmp/nospam-909540 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-909540 --log_dir /tmp/nospam-909540 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-909540 --log_dir /tmp/nospam-909540 unpause
--- PASS: TestErrorSpam/unpause (2.48s)

TestErrorSpam/stop (5.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-909540 --log_dir /tmp/nospam-909540 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-909540 --log_dir /tmp/nospam-909540 stop: (2.535327923s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-909540 --log_dir /tmp/nospam-909540 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-909540 --log_dir /tmp/nospam-909540 stop: (1.370017438s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-909540 --log_dir /tmp/nospam-909540 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-909540 --log_dir /tmp/nospam-909540 stop: (1.610230053s)
--- PASS: TestErrorSpam/stop (5.52s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21139-549598/.minikube/files/etc/test/nested/copy/553473/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (89.05s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-419649 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-419649 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m29.04907606s)
--- PASS: TestFunctional/serial/StartWithProxy (89.05s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.26s)

=== RUN   TestFunctional/serial/SoftStart
I1109 13:44:18.607437  553473 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-419649 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-419649 --alsologtostderr -v=8: (36.260820682s)
functional_test.go:678: soft start took 36.262137207s for "functional-419649" cluster.
I1109 13:44:54.868745  553473 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (36.26s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-419649 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-419649 cache add registry.k8s.io/pause:3.1: (1.341467259s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-419649 cache add registry.k8s.io/pause:3.3: (1.181779852s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-419649 cache add registry.k8s.io/pause:latest: (1.284370313s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.81s)

TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-419649 /tmp/TestFunctionalserialCacheCmdcacheadd_local998587703/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 cache add minikube-local-cache-test:functional-419649
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 cache delete minikube-local-cache-test:functional-419649
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-419649
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419649 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (217.207071ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-419649 cache reload: (1.205346516s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)
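
The reload sequence above is a useful recovery pattern on its own: when an image cached on the host has been removed inside the node, `cache reload` pushes it back. All commands below are taken from the test; only the trailing comments are added:

  minikube -p functional-419649 ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p functional-419649 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
  minikube -p functional-419649 cache reload
  minikube -p functional-419649 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again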

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 kubectl -- --context functional-419649 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-419649 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ComponentHealth (0.08s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-419649 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)
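
The health check parses the control-plane pods' JSON; roughly the same view can be had with a custom-columns query (illustrative, not the test's exact parsing):

  kubectl --context functional-419649 get po -l tier=control-plane -n kube-system \
    -o custom-columns=NAME:.metadata.name,PHASE:.status.phase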

TestFunctional/serial/LogsCmd (1.82s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-419649 logs: (1.821423028s)
--- PASS: TestFunctional/serial/LogsCmd (1.82s)

TestFunctional/serial/LogsFileCmd (1.83s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 logs --file /tmp/TestFunctionalserialLogsFileCmd2103949337/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-419649 logs --file /tmp/TestFunctionalserialLogsFileCmd2103949337/001/logs.txt: (1.828092295s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.83s)

TestFunctional/serial/InvalidService (4.16s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-419649 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-419649
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-419649: exit status 115 (288.539512ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.90:30866 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-419649 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.16s)
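
The SVC_UNREACHABLE exit is the expected outcome here: invalid-svc selects no running pod, so the service has no endpoints to tunnel to. A quick way to confirm that state (illustrative addition):

  kubectl --context functional-419649 get endpoints invalid-svc
  # ENDPOINTS shows <none> while the backing pod is absent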

TestFunctional/parallel/ConfigCmd (0.54s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419649 config get cpus: exit status 14 (74.796561ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419649 config get cpus: exit status 14 (100.401073ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)
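
Both exit-14 results are the assertion, not a failure: `config get` signals a missing key with that code. The round trip the test performs, in order (commands from the log, comments added):

  minikube -p functional-419649 config unset cpus
  minikube -p functional-419649 config get cpus    # exit 14: key not in config
  minikube -p functional-419649 config set cpus 2
  minikube -p functional-419649 config get cpus    # prints 2
  minikube -p functional-419649 config unset cpus
  minikube -p functional-419649 config get cpus    # exit 14 again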

TestFunctional/parallel/DryRun (0.26s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-419649 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-419649 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (127.384241ms)
-- stdout --
	* [functional-419649] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1109 13:51:39.593708  561543 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:51:39.594052  561543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:51:39.594070  561543 out.go:374] Setting ErrFile to fd 2...
	I1109 13:51:39.594076  561543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:51:39.594438  561543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 13:51:39.595204  561543 out.go:368] Setting JSON to false
	I1109 13:51:39.596312  561543 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":70449,"bootTime":1762625851,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 13:51:39.596445  561543 start.go:143] virtualization: kvm guest
	I1109 13:51:39.598891  561543 out.go:179] * [functional-419649] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 13:51:39.600603  561543 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:51:39.600630  561543 notify.go:221] Checking for updates...
	I1109 13:51:39.603164  561543 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:51:39.605045  561543 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 13:51:39.606250  561543 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 13:51:39.607579  561543 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 13:51:39.609011  561543 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:51:39.610933  561543 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:51:39.611426  561543 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:51:39.648565  561543 out.go:179] * Using the kvm2 driver based on existing profile
	I1109 13:51:39.649776  561543 start.go:309] selected driver: kvm2
	I1109 13:51:39.649823  561543 start.go:930] validating driver "kvm2" against &{Name:functional-419649 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-419649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:51:39.649953  561543 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:51:39.652360  561543 out.go:203] 
	W1109 13:51:39.653664  561543 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1109 13:51:39.655039  561543 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-419649 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.26s)
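
Exit code 23 is the expected outcome of the first run: --dry-run still validates the requested resources, and 250MB sits below the 1800MB usable minimum reported above. A sketch of both paths (echoing $? is an illustrative addition):

  minikube start -p functional-419649 --dry-run --memory 250MB \
    --driver=kvm2 --container-runtime=crio; echo $?   # 23
  minikube start -p functional-419649 --dry-run \
    --driver=kvm2 --container-runtime=crio; echo $?   # 0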

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-419649 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-419649 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (133.07102ms)
-- stdout --
	* [functional-419649] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1109 13:51:39.860979  561575 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:51:39.861103  561575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:51:39.861110  561575 out.go:374] Setting ErrFile to fd 2...
	I1109 13:51:39.861115  561575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:51:39.861532  561575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 13:51:39.862180  561575 out.go:368] Setting JSON to false
	I1109 13:51:39.863220  561575 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":70449,"bootTime":1762625851,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 13:51:39.863355  561575 start.go:143] virtualization: kvm guest
	I1109 13:51:39.865116  561575 out.go:179] * [functional-419649] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1109 13:51:39.866506  561575 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:51:39.866542  561575 notify.go:221] Checking for updates...
	I1109 13:51:39.869030  561575 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:51:39.870218  561575 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 13:51:39.871342  561575 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 13:51:39.872675  561575 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 13:51:39.873970  561575 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:51:39.875604  561575 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:51:39.876177  561575 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:51:39.915932  561575 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1109 13:51:39.917245  561575 start.go:309] selected driver: kvm2
	I1109 13:51:39.917274  561575 start.go:930] validating driver "kvm2" against &{Name:functional-419649 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-419649 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:51:39.917426  561575 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:51:39.919670  561575 out.go:203] 
	W1109 13:51:39.920739  561575 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1109 13:51:39.921941  561575 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (0.75s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.75s)

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/SSHCmd (0.41s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

TestFunctional/parallel/CpCmd (1.5s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh -n functional-419649 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 cp functional-419649:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1584107693/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh -n functional-419649 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh -n functional-419649 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.50s)
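
The three cp calls cover both copy directions plus a destination whose parent directories do not yet exist (which cp creates, as the final sudo cat confirms). A sketch, with the host-side target path as a placeholder:

    # host -> guest: the target is an absolute path inside the VM
    out/minikube-linux-amd64 -p functional-419649 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # guest -> host: the source is addressed as <node>:<path>
    out/minikube-linux-amd64 -p functional-419649 cp functional-419649:/home/docker/cp-test.txt /tmp/cp-test.txt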

TestFunctional/parallel/FileSync (0.18s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/553473/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "sudo cat /etc/test/nested/copy/553473/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.18s)
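
What this verifies, as I understand minikube's file sync: a file staged under $MINIKUBE_HOME/files/<path> on the host appears at /<path> inside the VM (the 553473 component above is just this run's test PID). A hypothetical walk-through, assuming the default ~/.minikube home:

    # stage a file on the host
    mkdir -p ~/.minikube/files/etc/test
    echo "hello from the host" > ~/.minikube/files/etc/test/hello
    # after the next start, it is mirrored into the guest
    out/minikube-linux-amd64 -p functional-419649 ssh "cat /etc/test/hello"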

TestFunctional/parallel/CertSync (1.43s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/553473.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "sudo cat /etc/ssl/certs/553473.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/553473.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "sudo cat /usr/share/ca-certificates/553473.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5534732.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "sudo cat /etc/ssl/certs/5534732.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5534732.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "sudo cat /usr/share/ca-certificates/5534732.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.43s)
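
The six paths above are the same certificates surfaced three ways: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and an OpenSSL subject-hash entry (51391683.0). A sketch of checking one by hand; that the hash name is the cert's openssl subject hash is my reading, and ~/.minikube/certs as the host-side source is an assumption:

    # the synced PEM, by name
    out/minikube-linux-amd64 -p functional-419649 ssh "sudo cat /etc/ssl/certs/553473.pem"
    # the subject hash (e.g. 51391683) that names the <hash>.0 entry
    openssl x509 -noout -hash -in ~/.minikube/certs/553473.pem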

TestFunctional/parallel/NodeLabels (0.08s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-419649 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
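
The go-template above walks the first node's label map and prints each key. The same check, plus a jsonpath variant that is my addition rather than what the test runs:

    kubectl --context functional-419649 get nodes --output=go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
    # jsonpath equivalent: dump the whole label map of the first node
    kubectl --context functional-419649 get nodes -o jsonpath='{.items[0].metadata.labels}'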

TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419649 ssh "sudo systemctl is-active docker": exit status 1 (249.517547ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419649 ssh "sudo systemctl is-active containerd": exit status 1 (241.752011ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)
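
Exit status 3 in both stderr blocks is systemd's "unit is inactive" code, so is-active doubles as a scriptable probe: with crio as this job's runtime, docker and containerd must both be down. A sketch of the complementary check (the crio unit name is an assumption consistent with the runtime in use):

    # expected: "active", exit 0
    out/minikube-linux-amd64 -p functional-419649 ssh "sudo systemctl is-active crio"
    # expected: "inactive", non-zero exit
    out/minikube-linux-amd64 -p functional-419649 ssh "sudo systemctl is-active docker" || echo "docker is off, as intended"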

TestFunctional/parallel/License (0.31s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.31s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)
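
All three UpdateContextCmd cases drive the same command: update-context rewrites this profile's kubeconfig entry so it points at the cluster's current address. A sketch of the usual sequence after a VM's IP changes:

    # refresh the kubeconfig entry for the profile
    out/minikube-linux-amd64 -p functional-419649 update-context
    # kubectl should reach the cluster again
    kubectl --context functional-419649 get nodes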

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.5s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)
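
version --short prints only the minikube version string, while -o=json --components also reports the versions of the bundled components (which is why, as far as I can tell, this variant wants a running cluster to query). Sketch:

    out/minikube-linux-amd64 -p functional-419649 version --short
    out/minikube-linux-amd64 -p functional-419649 version -o=json --components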

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-419649 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-419649
localhost/kicbase/echo-server:functional-419649
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-419649 image ls --format short --alsologtostderr:
I1109 13:56:43.526575  562902 out.go:360] Setting OutFile to fd 1 ...
I1109 13:56:43.526863  562902 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:56:43.526872  562902 out.go:374] Setting ErrFile to fd 2...
I1109 13:56:43.526876  562902 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:56:43.527102  562902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
I1109 13:56:43.527733  562902 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:56:43.527880  562902 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:56:43.530558  562902 ssh_runner.go:195] Run: systemctl --version
I1109 13:56:43.533128  562902 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
I1109 13:56:43.533623  562902 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
I1109 13:56:43.533653  562902 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
I1109 13:56:43.533849  562902 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/functional-419649/id_rsa Username:docker}
I1109 13:56:43.614862  562902 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-419649 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/minikube-local-cache-test     │ functional-419649  │ 45063f3e2474a │ 3.33kB │
│ localhost/my-image                      │ functional-419649  │ 0421d7399006e │ 1.47MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-419649  │ 9056ab77afb8e │ 4.95MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-419649 image ls --format table --alsologtostderr:
I1109 13:56:47.648031  562968 out.go:360] Setting OutFile to fd 1 ...
I1109 13:56:47.648185  562968 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:56:47.648195  562968 out.go:374] Setting ErrFile to fd 2...
I1109 13:56:47.648199  562968 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:56:47.648403  562968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
I1109 13:56:47.649094  562968 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:56:47.649213  562968 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:56:47.651743  562968 ssh_runner.go:195] Run: systemctl --version
I1109 13:56:47.654254  562968 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
I1109 13:56:47.654815  562968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
I1109 13:56:47.654858  562968 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
I1109 13:56:47.655027  562968 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/functional-419649/id_rsa Username:docker}
I1109 13:56:47.736894  562968 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-419649 image ls --format json --alsologtostderr:
[{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"cbcbe49148a41c87bcb5fd06c4ec5717c277a2f92880096e7548e780a245bb81","repoDigests":["docker.io/library/238a02a66f302b200ebae8047017c9ed8433759e30b02c92b59d34806d733e58-tmp@sha256:d4053b5224149ccf73fc4785abb0ef2e47cfbe56d2470df234840a4a1e2de158"],"repoTags":[],"size":"1466018"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"45063f3e2474a85abd5e5b7f8c70bc04b97176b904168765e8e815f6a6593b76","repoDigests":["localhost/minikube-local-cache-test@sha256:a5922aeb2598747c765882d7adf08eadc72e24157d087415a6fb81967ca17445"],"repoTags":["localhost/minikube-local-cache-test:functional-419649"],"size":"3330"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-419649"],"size":"4945246"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"0421d7399006e058415688b422951b492401ae9f738eff6909bf8d87c9201e2f","repoDigests":["localhost/my-image@sha256:417894c4d07595fe126d4523e72eff78ab5440d824d8e2ffaab77e67ccddb987"],"repoTags":["localhost/my-image:functional-419649"],"size":"1468599"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-419649 image ls --format json --alsologtostderr:
I1109 13:56:47.433357  562957 out.go:360] Setting OutFile to fd 1 ...
I1109 13:56:47.433687  562957 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:56:47.433699  562957 out.go:374] Setting ErrFile to fd 2...
I1109 13:56:47.433703  562957 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:56:47.433931  562957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
I1109 13:56:47.434585  562957 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:56:47.434685  562957 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:56:47.436916  562957 ssh_runner.go:195] Run: systemctl --version
I1109 13:56:47.439258  562957 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
I1109 13:56:47.439789  562957 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
I1109 13:56:47.439842  562957 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
I1109 13:56:47.440016  562957 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/functional-419649/id_rsa Username:docker}
I1109 13:56:47.521185  562957 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-419649 image ls --format yaml --alsologtostderr:
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 45063f3e2474a85abd5e5b7f8c70bc04b97176b904168765e8e815f6a6593b76
repoDigests:
- localhost/minikube-local-cache-test@sha256:a5922aeb2598747c765882d7adf08eadc72e24157d087415a6fb81967ca17445
repoTags:
- localhost/minikube-local-cache-test:functional-419649
size: "3330"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-419649
size: "4945246"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-419649 image ls --format yaml --alsologtostderr:
I1109 13:56:43.732739  562913 out.go:360] Setting OutFile to fd 1 ...
I1109 13:56:43.733091  562913 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:56:43.733105  562913 out.go:374] Setting ErrFile to fd 2...
I1109 13:56:43.733109  562913 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:56:43.733312  562913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
I1109 13:56:43.733962  562913 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:56:43.734049  562913 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:56:43.736371  562913 ssh_runner.go:195] Run: systemctl --version
I1109 13:56:43.739103  562913 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
I1109 13:56:43.739598  562913 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
I1109 13:56:43.739636  562913 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
I1109 13:56:43.739886  562913 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/functional-419649/id_rsa Username:docker}
I1109 13:56:43.824905  562913 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
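
All four ImageList variants (short, table, json, yaml above) wrap the same "sudo crictl images --output json" call visible in each stderr trace; --format only changes the client-side rendering. A scripting sketch (jq is an assumption of the sketch):

    # image names only, one per line
    out/minikube-linux-amd64 -p functional-419649 image ls --format short
    # machine-readable: list every tag the runtime knows about
    out/minikube-linux-amd64 -p functional-419649 image ls --format json | jq -r '.[].repoTags[]'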

TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419649 ssh pgrep buildkitd: exit status 1 (178.902685ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 image build -t localhost/my-image:functional-419649 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-419649 image build -t localhost/my-image:functional-419649 testdata/build --alsologtostderr: (3.085061987s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-419649 image build -t localhost/my-image:functional-419649 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> cbcbe49148a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-419649
--> 0421d739900
Successfully tagged localhost/my-image:functional-419649
0421d7399006e058415688b422951b492401ae9f738eff6909bf8d87c9201e2f
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-419649 image build -t localhost/my-image:functional-419649 testdata/build --alsologtostderr:
I1109 13:56:44.124623  562935 out.go:360] Setting OutFile to fd 1 ...
I1109 13:56:44.124764  562935 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:56:44.124778  562935 out.go:374] Setting ErrFile to fd 2...
I1109 13:56:44.124784  562935 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:56:44.125033  562935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
I1109 13:56:44.125621  562935 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:56:44.126948  562935 config.go:182] Loaded profile config "functional-419649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:56:44.129564  562935 ssh_runner.go:195] Run: systemctl --version
I1109 13:56:44.132152  562935 main.go:143] libmachine: domain functional-419649 has defined MAC address 52:54:00:73:87:3e in network mk-functional-419649
I1109 13:56:44.132616  562935 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:87:3e", ip: ""} in network mk-functional-419649: {Iface:virbr1 ExpiryTime:2025-11-09 14:43:07 +0000 UTC Type:0 Mac:52:54:00:73:87:3e Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:functional-419649 Clientid:01:52:54:00:73:87:3e}
I1109 13:56:44.132650  562935 main.go:143] libmachine: domain functional-419649 has defined IP address 192.168.39.90 and MAC address 52:54:00:73:87:3e in network mk-functional-419649
I1109 13:56:44.132822  562935 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/functional-419649/id_rsa Username:docker}
I1109 13:56:44.221238  562935 build_images.go:162] Building image from path: /tmp/build.4140376001.tar
I1109 13:56:44.221329  562935 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1109 13:56:44.237115  562935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4140376001.tar
I1109 13:56:44.243523  562935 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4140376001.tar: stat -c "%s %y" /var/lib/minikube/build/build.4140376001.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4140376001.tar': No such file or directory
I1109 13:56:44.243566  562935 ssh_runner.go:362] scp /tmp/build.4140376001.tar --> /var/lib/minikube/build/build.4140376001.tar (3072 bytes)
I1109 13:56:44.283833  562935 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4140376001
I1109 13:56:44.300314  562935 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4140376001 -xf /var/lib/minikube/build/build.4140376001.tar
I1109 13:56:44.314889  562935 crio.go:315] Building image: /var/lib/minikube/build/build.4140376001
I1109 13:56:44.315088  562935 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-419649 /var/lib/minikube/build/build.4140376001 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1109 13:56:47.105360  562935 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-419649 /var/lib/minikube/build/build.4140376001 --cgroup-manager=cgroupfs: (2.790239777s)
I1109 13:56:47.105431  562935 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4140376001
I1109 13:56:47.125074  562935 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4140376001.tar
I1109 13:56:47.140453  562935 build_images.go:218] Built localhost/my-image:functional-419649 from /tmp/build.4140376001.tar
I1109 13:56:47.140508  562935 build_images.go:134] succeeded building to: functional-419649
I1109 13:56:47.140513  562935 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)
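
The stderr trace lays out the build path on a crio cluster: the local context is tarred (/tmp/build.*.tar), copied to /var/lib/minikube/build in the VM, unpacked, and built there with sudo podman build; the failed pgrep buildkitd at the top is just the probe that selects this fallback. Reproducing it is one command:

    # build a local directory straight into the cluster's image store
    out/minikube-linux-amd64 -p functional-419649 image build -t localhost/my-image:functional-419649 testdata/build
    # confirm the result is visible to the runtime
    out/minikube-linux-amd64 -p functional-419649 image ls | grep my-image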

TestFunctional/parallel/ImageCommands/Setup (0.49s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-419649
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.94s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 image load --daemon kicbase/echo-server:functional-419649 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-419649 image load --daemon kicbase/echo-server:functional-419649 --alsologtostderr: (1.67504217s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.94s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 image load --daemon kicbase/echo-server:functional-419649 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-419649
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 image load --daemon kicbase/echo-server:functional-419649 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 image save kicbase/echo-server:functional-419649 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 image rm kicbase/echo-server:functional-419649 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.6s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-419649
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 image save --daemon kicbase/echo-server:functional-419649 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-419649
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.60s)
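
Taken together, the image blocks above walk a full round-trip between the host's docker daemon, a tarball, and the cluster's image store. Condensed sketch (the tar path is a placeholder):

    out/minikube-linux-amd64 -p functional-419649 image save kicbase/echo-server:functional-419649 /tmp/echo.tar   # cluster -> tar
    out/minikube-linux-amd64 -p functional-419649 image rm kicbase/echo-server:functional-419649                   # remove from cluster
    out/minikube-linux-amd64 -p functional-419649 image load /tmp/echo.tar                                         # tar -> cluster
    out/minikube-linux-amd64 -p functional-419649 image save --daemon kicbase/echo-server:functional-419649        # cluster -> host docker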

TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "269.737024ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "72.352703ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "293.420397ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "84.251996ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
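
The recorded timings make the point of the two modes: a full profile list (roughly 270-290ms here) queries each cluster's live status, while --light / -l (roughly 70-85ms) reads only the stored config. Sketch:

    # full listing: probes each profile's actual state
    out/minikube-linux-amd64 profile list -o json
    # light listing: config only, no status probe
    out/minikube-linux-amd64 profile list -o json --light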

TestFunctional/parallel/MountCmd/any-port (36.05s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-419649 /tmp/TestFunctionalparallelMountCmdany-port910271744/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1762696275541277107" to /tmp/TestFunctionalparallelMountCmdany-port910271744/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1762696275541277107" to /tmp/TestFunctionalparallelMountCmdany-port910271744/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1762696275541277107" to /tmp/TestFunctionalparallelMountCmdany-port910271744/001/test-1762696275541277107
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "findmnt -T /mount-9p | grep 9p"
I1109 13:51:15.751316  553473 detect.go:223] nested VM detected
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419649 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (253.483227ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1109 13:51:15.795146  553473 retry.go:31] will retry after 350.611222ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  9 13:51 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  9 13:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  9 13:51 test-1762696275541277107
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh cat /mount-9p/test-1762696275541277107
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-419649 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [a19c92c2-78f7-4060-ac8a-b2554d1b04cb] Pending
helpers_test.go:352: "busybox-mount" [a19c92c2-78f7-4060-ac8a-b2554d1b04cb] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [a19c92c2-78f7-4060-ac8a-b2554d1b04cb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [a19c92c2-78f7-4060-ac8a-b2554d1b04cb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 34.006761388s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-419649 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-419649 /tmp/TestFunctionalparallelMountCmdany-port910271744/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (36.05s)
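
The test's shape is the general recipe for a 9p host mount: start minikube mount as a long-running process, confirm the mount from inside the guest, use it, then stop the process (or force-unmount from the guest if it died uncleanly). Sketch with a placeholder host directory:

    # keep this process alive for the life of the mount
    out/minikube-linux-amd64 mount -p functional-419649 /tmp/hostdir:/mount-9p &
    # verify from inside the guest
    out/minikube-linux-amd64 -p functional-419649 ssh "findmnt -T /mount-9p | grep 9p"
    # last-resort cleanup if the mount process is gone
    out/minikube-linux-amd64 -p functional-419649 ssh "sudo umount -f /mount-9p"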

TestFunctional/parallel/MountCmd/specific-port (1.31s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-419649 /tmp/TestFunctionalparallelMountCmdspecific-port2962691435/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419649 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (175.588443ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1109 13:51:51.771125  553473 retry.go:31] will retry after 407.094618ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-419649 /tmp/TestFunctionalparallelMountCmdspecific-port2962691435/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419649 ssh "sudo umount -f /mount-9p": exit status 1 (169.848605ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-419649 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-419649 /tmp/TestFunctionalparallelMountCmdspecific-port2962691435/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.31s)
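The two non-zero exits above are expected: findmnt finds no 9p mount during the first poll, and the forced umount reports "not mounted" (the remote umount exited 32) because the daemon had already been stopped, which the test accepts as clean. A sketch of treating that case as already-unmounted; the profile name is from this run and the sketch is illustrative only:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// minikube ssh exits non-zero when the remote command fails; in the log
	// above, the remote umount itself reported status 32 ("not mounted").
	err := exec.Command("minikube", "-p", "functional-419649", "ssh", "sudo umount -f /mount-9p").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("treating as already unmounted; ssh exit code:", ee.ExitCode())
		return
	}
	if err != nil {
		panic(err) // e.g. minikube not on PATH
	}
	fmt.Println("unmounted cleanly")
}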

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-419649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup522304696/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-419649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup522304696/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-419649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup522304696/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419649 ssh "findmnt -T" /mount1: exit status 1 (202.074991ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1109 13:51:53.109871  553473 retry.go:31] will retry after 602.159655ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-419649 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-419649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup522304696/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-419649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup522304696/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-419649 /tmp/TestFunctionalparallelMountCmdVerifyCleanup522304696/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)

TestFunctional/parallel/ServiceCmd/List (1.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-419649 service list: (1.216194234s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.22s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-419649 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-419649 service list -o json: (1.218351049s)
functional_test.go:1504: Took "1.218467166s" to run "out/minikube-linux-amd64 -p functional-419649 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.22s)
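Both service listings above take about 1.2s; the JSON variant is the machine-readable one. A minimal consumer sketch follows. The struct tags (Namespace, Name, URLs) are an assumption about the schema of `minikube service list -o json` and should be checked against the version under test.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// svc mirrors the assumed shape of one entry in the JSON listing.
type svc struct {
	Namespace string   `json:"Namespace"`
	Name      string   `json:"Name"`
	URLs      []string `json:"URLs"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-419649",
		"service", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var svcs []svc
	if err := json.Unmarshal(out, &svcs); err != nil {
		panic(err)
	}
	for _, s := range svcs {
		fmt.Printf("%s/%s -> %v\n", s.Namespace, s.Name, s.URLs)
	}
}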

                                                
                                    
TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-419649
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-419649
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-419649
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (259.92s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1109 14:02:50.448809  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:06:07.145638  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:06:07.152286  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:06:07.163897  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:06:07.185511  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:06:07.227110  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:06:07.308766  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:06:07.470688  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:06:07.792602  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:06:08.434778  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:06:09.716444  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:06:12.278752  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:06:17.400628  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-451786 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (4m19.241888038s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (259.92s)

TestMultiControlPlane/serial/DeployApp (7.45s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-451786 kubectl -- rollout status deployment/busybox: (4.659779284s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- exec busybox-7b57f96db7-fkrd4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- exec busybox-7b57f96db7-jndk9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- exec busybox-7b57f96db7-xg7cr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- exec busybox-7b57f96db7-fkrd4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- exec busybox-7b57f96db7-jndk9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- exec busybox-7b57f96db7-xg7cr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- exec busybox-7b57f96db7-fkrd4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- exec busybox-7b57f96db7-jndk9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- exec busybox-7b57f96db7-xg7cr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.45s)
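DeployApp resolves three names from every busybox replica, so each pod's resolver gets exercised against cluster DNS through whichever control plane serves it. A sketch of the same fan-out; the pod names are specific to this run and will differ elsewhere:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7b57f96db7-fkrd4", "busybox-7b57f96db7-jndk9", "busybox-7b57f96db7-xg7cr"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, p := range pods {
		for _, n := range names {
			// Public name, short service name, and FQDN must all resolve.
			out, err := exec.Command("kubectl", "--context", "ha-451786",
				"exec", p, "--", "nslookup", n).CombinedOutput()
			fmt.Printf("%s -> %s: err=%v\n%s\n", p, n, err, out)
		}
	}
}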

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.69s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- exec busybox-7b57f96db7-fkrd4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- exec busybox-7b57f96db7-fkrd4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- exec busybox-7b57f96db7-jndk9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- exec busybox-7b57f96db7-jndk9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- exec busybox-7b57f96db7-xg7cr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E1109 14:06:27.379213  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 kubectl -- exec busybox-7b57f96db7-xg7cr -- sh -c "ping -c 1 192.168.39.1"
E1109 14:06:27.642979  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.69s)
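The pipeline in these steps takes line 5, field 3 of busybox nslookup output, which is where the resolved address of host.minikube.internal lands, and then pings it from the pod. The offsets are tied to busybox's output format; a sketch of the same extraction, with an illustrative output sample:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Illustrative busybox-style nslookup output; real output may differ.
	out := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal`
	lines := strings.Split(out, "\n")
	// awk 'NR==5' is lines[4]; cut -d' ' -f3 is the third raw column.
	// strings.Fields collapses whitespace runs, so that column is index 2.
	fields := strings.Fields(lines[4])
	fmt.Println("host IP:", fields[2]) // 192.168.39.1
}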

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (50.62s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 node add --alsologtostderr -v 5
E1109 14:06:48.125036  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-451786 node add --alsologtostderr -v 5: (49.844439174s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (50.62s)

TestMultiControlPlane/serial/NodeLabels (0.09s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-451786 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.09s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

TestMultiControlPlane/serial/CopyFile (12.6s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp testdata/cp-test.txt ha-451786:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp ha-451786:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile851607760/001/cp-test_ha-451786.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp ha-451786:/home/docker/cp-test.txt ha-451786-m02:/home/docker/cp-test_ha-451786_ha-451786-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m02 "sudo cat /home/docker/cp-test_ha-451786_ha-451786-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp ha-451786:/home/docker/cp-test.txt ha-451786-m03:/home/docker/cp-test_ha-451786_ha-451786-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m03 "sudo cat /home/docker/cp-test_ha-451786_ha-451786-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp ha-451786:/home/docker/cp-test.txt ha-451786-m04:/home/docker/cp-test_ha-451786_ha-451786-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m04 "sudo cat /home/docker/cp-test_ha-451786_ha-451786-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp testdata/cp-test.txt ha-451786-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp ha-451786-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile851607760/001/cp-test_ha-451786-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp ha-451786-m02:/home/docker/cp-test.txt ha-451786:/home/docker/cp-test_ha-451786-m02_ha-451786.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786 "sudo cat /home/docker/cp-test_ha-451786-m02_ha-451786.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp ha-451786-m02:/home/docker/cp-test.txt ha-451786-m03:/home/docker/cp-test_ha-451786-m02_ha-451786-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m03 "sudo cat /home/docker/cp-test_ha-451786-m02_ha-451786-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp ha-451786-m02:/home/docker/cp-test.txt ha-451786-m04:/home/docker/cp-test_ha-451786-m02_ha-451786-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m04 "sudo cat /home/docker/cp-test_ha-451786-m02_ha-451786-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp testdata/cp-test.txt ha-451786-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp ha-451786-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile851607760/001/cp-test_ha-451786-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp ha-451786-m03:/home/docker/cp-test.txt ha-451786:/home/docker/cp-test_ha-451786-m03_ha-451786.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786 "sudo cat /home/docker/cp-test_ha-451786-m03_ha-451786.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp ha-451786-m03:/home/docker/cp-test.txt ha-451786-m02:/home/docker/cp-test_ha-451786-m03_ha-451786-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m02 "sudo cat /home/docker/cp-test_ha-451786-m03_ha-451786-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp ha-451786-m03:/home/docker/cp-test.txt ha-451786-m04:/home/docker/cp-test_ha-451786-m03_ha-451786-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m04 "sudo cat /home/docker/cp-test_ha-451786-m03_ha-451786-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp testdata/cp-test.txt ha-451786-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m04 "sudo cat /home/docker/cp-test.txt"
E1109 14:07:29.087000  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp ha-451786-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile851607760/001/cp-test_ha-451786-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp ha-451786-m04:/home/docker/cp-test.txt ha-451786:/home/docker/cp-test_ha-451786-m04_ha-451786.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786 "sudo cat /home/docker/cp-test_ha-451786-m04_ha-451786.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp ha-451786-m04:/home/docker/cp-test.txt ha-451786-m02:/home/docker/cp-test_ha-451786-m04_ha-451786-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m02 "sudo cat /home/docker/cp-test_ha-451786-m04_ha-451786-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 cp ha-451786-m04:/home/docker/cp-test.txt ha-451786-m03:/home/docker/cp-test_ha-451786-m04_ha-451786-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 ssh -n ha-451786-m03 "sudo cat /home/docker/cp-test_ha-451786-m04_ha-451786-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.60s)
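The CopyFile block is a full matrix: testdata onto each node, each node back to the host, and every node-to-node pair, with each copy verified by `ssh cat`. A sketch that generates the node-to-node part of the matrix (profile and paths from this run; the copies back to the host temp dir are omitted for brevity):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes minikube against the ha-451786 profile and prints the result.
func run(args ...string) {
	out, err := exec.Command("minikube", append([]string{"-p", "ha-451786"}, args...)...).CombinedOutput()
	fmt.Printf("%v: err=%v\n%s", args, err, out)
}

func main() {
	nodes := []string{"ha-451786", "ha-451786-m02", "ha-451786-m03", "ha-451786-m04"}
	for _, src := range nodes {
		// Seed the source node, then verify the file landed.
		run("cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		run("ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			dest := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
			run("cp", src+":/home/docker/cp-test.txt", dest)
			run("ssh", "-n", dst, fmt.Sprintf("sudo cat /home/docker/cp-test_%s_%s.txt", src, dst))
		}
	}
}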

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (81.15s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 node stop m02 --alsologtostderr -v 5
E1109 14:08:51.010196  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-451786 node stop m02 --alsologtostderr -v 5: (1m20.550417959s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-451786 status --alsologtostderr -v 5: exit status 7 (601.491372ms)

-- stdout --
	ha-451786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-451786-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-451786-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-451786-m04
	type: Worker
	host: Running
	kubelet: Running
-- /stdout --
** stderr ** 
	I1109 14:08:52.458672  567539 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:08:52.458790  567539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:08:52.458823  567539 out.go:374] Setting ErrFile to fd 2...
	I1109 14:08:52.458830  567539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:08:52.459069  567539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 14:08:52.459267  567539 out.go:368] Setting JSON to false
	I1109 14:08:52.459299  567539 mustload.go:66] Loading cluster: ha-451786
	I1109 14:08:52.459428  567539 notify.go:221] Checking for updates...
	I1109 14:08:52.459846  567539 config.go:182] Loaded profile config "ha-451786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:08:52.459890  567539 status.go:174] checking status of ha-451786 ...
	I1109 14:08:52.462734  567539 status.go:371] ha-451786 host status = "Running" (err=<nil>)
	I1109 14:08:52.462772  567539 host.go:66] Checking if "ha-451786" exists ...
	I1109 14:08:52.466745  567539 main.go:143] libmachine: domain ha-451786 has defined MAC address 52:54:00:71:2c:8d in network mk-ha-451786
	I1109 14:08:52.467373  567539 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:2c:8d", ip: ""} in network mk-ha-451786: {Iface:virbr1 ExpiryTime:2025-11-09 15:02:16 +0000 UTC Type:0 Mac:52:54:00:71:2c:8d Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-451786 Clientid:01:52:54:00:71:2c:8d}
	I1109 14:08:52.467408  567539 main.go:143] libmachine: domain ha-451786 has defined IP address 192.168.39.162 and MAC address 52:54:00:71:2c:8d in network mk-ha-451786
	I1109 14:08:52.467598  567539 host.go:66] Checking if "ha-451786" exists ...
	I1109 14:08:52.467965  567539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:08:52.471505  567539 main.go:143] libmachine: domain ha-451786 has defined MAC address 52:54:00:71:2c:8d in network mk-ha-451786
	I1109 14:08:52.472082  567539 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:2c:8d", ip: ""} in network mk-ha-451786: {Iface:virbr1 ExpiryTime:2025-11-09 15:02:16 +0000 UTC Type:0 Mac:52:54:00:71:2c:8d Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:ha-451786 Clientid:01:52:54:00:71:2c:8d}
	I1109 14:08:52.472136  567539 main.go:143] libmachine: domain ha-451786 has defined IP address 192.168.39.162 and MAC address 52:54:00:71:2c:8d in network mk-ha-451786
	I1109 14:08:52.472377  567539 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/ha-451786/id_rsa Username:docker}
	I1109 14:08:52.564942  567539 ssh_runner.go:195] Run: systemctl --version
	I1109 14:08:52.574532  567539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:08:52.598489  567539 kubeconfig.go:125] found "ha-451786" server: "https://192.168.39.254:8443"
	I1109 14:08:52.598535  567539 api_server.go:166] Checking apiserver status ...
	I1109 14:08:52.598573  567539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:08:52.625074  567539 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1384/cgroup
	W1109 14:08:52.639147  567539 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1384/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:08:52.639230  567539 ssh_runner.go:195] Run: ls
	I1109 14:08:52.646441  567539 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1109 14:08:52.653065  567539 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1109 14:08:52.653096  567539 status.go:463] ha-451786 apiserver status = Running (err=<nil>)
	I1109 14:08:52.653108  567539 status.go:176] ha-451786 status: &{Name:ha-451786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:08:52.653130  567539 status.go:174] checking status of ha-451786-m02 ...
	I1109 14:08:52.655120  567539 status.go:371] ha-451786-m02 host status = "Stopped" (err=<nil>)
	I1109 14:08:52.655159  567539 status.go:384] host is not running, skipping remaining checks
	I1109 14:08:52.655170  567539 status.go:176] ha-451786-m02 status: &{Name:ha-451786-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:08:52.655192  567539 status.go:174] checking status of ha-451786-m03 ...
	I1109 14:08:52.656409  567539 status.go:371] ha-451786-m03 host status = "Running" (err=<nil>)
	I1109 14:08:52.656435  567539 host.go:66] Checking if "ha-451786-m03" exists ...
	I1109 14:08:52.658731  567539 main.go:143] libmachine: domain ha-451786-m03 has defined MAC address 52:54:00:cf:99:47 in network mk-ha-451786
	I1109 14:08:52.659152  567539 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cf:99:47", ip: ""} in network mk-ha-451786: {Iface:virbr1 ExpiryTime:2025-11-09 15:04:32 +0000 UTC Type:0 Mac:52:54:00:cf:99:47 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-451786-m03 Clientid:01:52:54:00:cf:99:47}
	I1109 14:08:52.659181  567539 main.go:143] libmachine: domain ha-451786-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:cf:99:47 in network mk-ha-451786
	I1109 14:08:52.659358  567539 host.go:66] Checking if "ha-451786-m03" exists ...
	I1109 14:08:52.659620  567539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:08:52.661946  567539 main.go:143] libmachine: domain ha-451786-m03 has defined MAC address 52:54:00:cf:99:47 in network mk-ha-451786
	I1109 14:08:52.662335  567539 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cf:99:47", ip: ""} in network mk-ha-451786: {Iface:virbr1 ExpiryTime:2025-11-09 15:04:32 +0000 UTC Type:0 Mac:52:54:00:cf:99:47 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-451786-m03 Clientid:01:52:54:00:cf:99:47}
	I1109 14:08:52.662359  567539 main.go:143] libmachine: domain ha-451786-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:cf:99:47 in network mk-ha-451786
	I1109 14:08:52.662499  567539 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/ha-451786-m03/id_rsa Username:docker}
	I1109 14:08:52.763061  567539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:08:52.792035  567539 kubeconfig.go:125] found "ha-451786" server: "https://192.168.39.254:8443"
	I1109 14:08:52.792068  567539 api_server.go:166] Checking apiserver status ...
	I1109 14:08:52.792104  567539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:08:52.819630  567539 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1820/cgroup
	W1109 14:08:52.836671  567539 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1820/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:08:52.836788  567539 ssh_runner.go:195] Run: ls
	I1109 14:08:52.847664  567539 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1109 14:08:52.854238  567539 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1109 14:08:52.854281  567539 status.go:463] ha-451786-m03 apiserver status = Running (err=<nil>)
	I1109 14:08:52.854293  567539 status.go:176] ha-451786-m03 status: &{Name:ha-451786-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:08:52.854327  567539 status.go:174] checking status of ha-451786-m04 ...
	I1109 14:08:52.856852  567539 status.go:371] ha-451786-m04 host status = "Running" (err=<nil>)
	I1109 14:08:52.856887  567539 host.go:66] Checking if "ha-451786-m04" exists ...
	I1109 14:08:52.860571  567539 main.go:143] libmachine: domain ha-451786-m04 has defined MAC address 52:54:00:08:1a:9c in network mk-ha-451786
	I1109 14:08:52.861183  567539 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:08:1a:9c", ip: ""} in network mk-ha-451786: {Iface:virbr1 ExpiryTime:2025-11-09 15:06:46 +0000 UTC Type:0 Mac:52:54:00:08:1a:9c Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-451786-m04 Clientid:01:52:54:00:08:1a:9c}
	I1109 14:08:52.861229  567539 main.go:143] libmachine: domain ha-451786-m04 has defined IP address 192.168.39.78 and MAC address 52:54:00:08:1a:9c in network mk-ha-451786
	I1109 14:08:52.861448  567539 host.go:66] Checking if "ha-451786-m04" exists ...
	I1109 14:08:52.861759  567539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:08:52.865495  567539 main.go:143] libmachine: domain ha-451786-m04 has defined MAC address 52:54:00:08:1a:9c in network mk-ha-451786
	I1109 14:08:52.866305  567539 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:08:1a:9c", ip: ""} in network mk-ha-451786: {Iface:virbr1 ExpiryTime:2025-11-09 15:06:46 +0000 UTC Type:0 Mac:52:54:00:08:1a:9c Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-451786-m04 Clientid:01:52:54:00:08:1a:9c}
	I1109 14:08:52.866342  567539 main.go:143] libmachine: domain ha-451786-m04 has defined IP address 192.168.39.78 and MAC address 52:54:00:08:1a:9c in network mk-ha-451786
	I1109 14:08:52.866635  567539 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/ha-451786-m04/id_rsa Username:docker}
	I1109 14:08:52.963151  567539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:08:52.987479  567539 status.go:176] ha-451786-m04 status: &{Name:ha-451786-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (81.15s)
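Exit status 7 here is not a failure of the test: `minikube status` encodes component health in the exit code bits, and with m02 fully stopped the minikube VM, cluster, and Kubernetes bits are all set (1 + 2 + 4 = 7). The bit meanings in the sketch below follow the command's help text and should be verified for the version in use.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("minikube", "-p", "ha-451786", "status").Run()
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode()
	}
	// Bit meanings per `minikube status --help`; confirm for your version.
	for bit, component := range map[int]string{1: "minikube VM", 2: "cluster", 4: "kubernetes"} {
		if code&bit != 0 {
			fmt.Println(component, "reported not OK")
		}
	}
}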

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)

TestMultiControlPlane/serial/RestartSecondaryNode (45.44s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-451786 node start m02 --alsologtostderr -v 5: (44.263290262s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-451786 status --alsologtostderr -v 5: (1.087184971s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (45.44s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.041560419s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.04s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (398.01s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 stop --alsologtostderr -v 5
E1109 14:11:07.145288  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:11:27.378997  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:11:34.852399  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-451786 stop --alsologtostderr -v 5: (4m16.735713264s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 start --wait true --alsologtostderr -v 5
E1109 14:16:07.145888  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-451786 start --wait true --alsologtostderr -v 5: (2m21.038052326s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (398.01s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.79s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 node delete m03 --alsologtostderr -v 5
E1109 14:16:27.378787  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-451786 node delete m03 --alsologtostderr -v 5: (18.050804748s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.79s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.6s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.60s)

TestMultiControlPlane/serial/StopCluster (230.56s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 stop --alsologtostderr -v 5
E1109 14:19:30.450558  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-451786 stop --alsologtostderr -v 5: (3m50.489056963s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-451786 status --alsologtostderr -v 5: exit status 7 (75.402133ms)

-- stdout --
	ha-451786
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-451786-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-451786-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I1109 14:20:28.034740  571273 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:20:28.035079  571273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:20:28.035089  571273 out.go:374] Setting ErrFile to fd 2...
	I1109 14:20:28.035107  571273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:20:28.035325  571273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 14:20:28.035515  571273 out.go:368] Setting JSON to false
	I1109 14:20:28.035554  571273 mustload.go:66] Loading cluster: ha-451786
	I1109 14:20:28.035689  571273 notify.go:221] Checking for updates...
	I1109 14:20:28.035984  571273 config.go:182] Loaded profile config "ha-451786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:20:28.036006  571273 status.go:174] checking status of ha-451786 ...
	I1109 14:20:28.038601  571273 status.go:371] ha-451786 host status = "Stopped" (err=<nil>)
	I1109 14:20:28.038634  571273 status.go:384] host is not running, skipping remaining checks
	I1109 14:20:28.038642  571273 status.go:176] ha-451786 status: &{Name:ha-451786 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:20:28.038667  571273 status.go:174] checking status of ha-451786-m02 ...
	I1109 14:20:28.040088  571273 status.go:371] ha-451786-m02 host status = "Stopped" (err=<nil>)
	I1109 14:20:28.040111  571273 status.go:384] host is not running, skipping remaining checks
	I1109 14:20:28.040117  571273 status.go:176] ha-451786-m02 status: &{Name:ha-451786-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:20:28.040134  571273 status.go:174] checking status of ha-451786-m04 ...
	I1109 14:20:28.041385  571273 status.go:371] ha-451786-m04 host status = "Stopped" (err=<nil>)
	I1109 14:20:28.041411  571273 status.go:384] host is not running, skipping remaining checks
	I1109 14:20:28.041416  571273 status.go:176] ha-451786-m04 status: &{Name:ha-451786-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (230.56s)

TestMultiControlPlane/serial/RestartCluster (115.61s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1109 14:21:07.146032  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:21:27.379035  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-451786 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m54.833401391s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (115.61s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.58s)

TestMultiControlPlane/serial/AddSecondaryNode (84.03s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 node add --control-plane --alsologtostderr -v 5
E1109 14:22:30.214632  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-451786 node add --control-plane --alsologtostderr -v 5: (1m23.236080769s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-451786 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.03s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

TestJSONOutput/start/Command (89.05s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-607771 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-607771 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m29.048895549s)
--- PASS: TestJSONOutput/start/Command (89.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
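The Audit and parallel subtests validate the stream these commands produce: every line of --output=json is a CloudEvent, and step events must carry distinct, strictly increasing currentstep values. A checking sketch follows; the event type and the string-typed currentstep field reflect minikube's usual JSON output and should be confirmed against the version under test.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

// event models the assumed CloudEvent envelope of one JSON log line.
type event struct {
	Type string `json:"type"`
	Data struct {
		CurrentStep string `json:"currentstep"`
	} `json:"data"`
}

func main() {
	last := -1
	sc := bufio.NewScanner(os.Stdin) // pipe the JSON log in
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue // skip non-step or non-JSON lines
		}
		n, _ := strconv.Atoi(ev.Data.CurrentStep)
		if n <= last {
			fmt.Println("step regression:", n, "after", last)
		}
		last = n
	}
}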

                                                
                                    
TestJSONOutput/pause/Command (0.87s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-607771 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.87s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.78s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-607771 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.78s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.28s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-607771 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-607771 --output=json --user=testUser: (7.275361206s)
--- PASS: TestJSONOutput/stop/Command (7.28s)

TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
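The DistinctCurrentSteps and IncreasingCurrentSteps subtests assert ordering properties of the currentstep field in the JSON events that pause, unpause and stop emit (the event shape is visible in the TestErrorJSONOutput stdout below). A minimal Go sketch of that property check, assuming one JSON event per line on stdin; the names here are illustrative, not minikube's actual test code:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// event mirrors the fields visible in the JSON lines of this report.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	prev := -1
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue
		}
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			continue
		}
		if ev.Type != "io.k8s.sigs.minikube.step" {
			continue // only step events carry a currentstep
		}
		cur, err := strconv.Atoi(ev.Data["currentstep"])
		if err != nil {
			continue
		}
		if cur <= prev { // steps must be distinct and increasing
			fmt.Printf("step order violated: %d after %d\n", cur, prev)
			os.Exit(1)
		}
		prev = cur
	}
}

Fed the step events of a passing run, this prints nothing and exits 0.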
TestErrorJSONOutput (0.29s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-164826 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-164826 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (98.409773ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"de9a2faf-8bd7-4acf-85e7-d33126d2616f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-164826] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d139b7fe-5219-4df0-9bf7-55c08850b8a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21139"}}
	{"specversion":"1.0","id":"7a3bfc67-3a89-4ba0-aa73-99fe8b6ffa36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a3f26bb6-b038-4e41-9147-868b7b139f56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig"}}
	{"specversion":"1.0","id":"fe7de1b5-3f3e-4f0c-bed9-669413b9e0ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube"}}
	{"specversion":"1.0","id":"34bfce12-36fe-495f-90e1-13b8ad52c992","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e87328e7-6669-4063-bcd8-0577b57ad6ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c114b064-7f52-4aee-82f7-60b458b0b6f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-164826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-164826
--- PASS: TestErrorJSONOutput (0.29s)
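Each stdout line above is a CloudEvents-style JSON object, and the final io.k8s.sigs.minikube.error event carries the machine-readable failure (name, exitcode) matching the process exit status 56. A small Go sketch of decoding that event; the struct fields mirror the keys visible above, and the sample line is abbreviated from this log:

package main

import (
	"encoding/json"
	"fmt"
)

// errorEvent models only the keys used here, taken from the log above.
type errorEvent struct {
	Type string `json:"type"`
	Data struct {
		ExitCode string `json:"exitcode"`
		Name     string `json:"name"`
		Message  string `json:"message"`
	} `json:"data"`
}

func main() {
	// Abbreviated copy of the last stdout line above.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev errorEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s (exit %s): %s\n", ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
	// Prints: DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on linux/amd64
}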
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (96.17s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-709124 --driver=kvm2  --container-runtime=crio
E1109 14:26:07.153637  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-709124 --driver=kvm2  --container-runtime=crio: (46.52246343s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-712575 --driver=kvm2  --container-runtime=crio
E1109 14:26:27.378518  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-712575 --driver=kvm2  --container-runtime=crio: (46.683717844s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-709124
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-712575
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-712575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-712575
helpers_test.go:175: Cleaning up "first-709124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-709124
--- PASS: TestMinikubeProfile (96.17s)

TestMountStart/serial/StartWithMountFirst (25.07s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-566675 --memory=3072 --mount-string /tmp/TestMountStartserial91296022/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-566675 --memory=3072 --mount-string /tmp/TestMountStartserial91296022/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.071914561s)
--- PASS: TestMountStart/serial/StartWithMountFirst (25.07s)

TestMountStart/serial/VerifyMountFirst (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-566675 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-566675 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)
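The verification is deliberately thin: an ls of the mount point plus findmnt --json /minikube-host over SSH. A Go sketch of consuming that findmnt output, assuming util-linux's JSON shape (a top-level filesystems array); it runs findmnt locally for illustration rather than through the test's SSH session:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// findmntOut models the JSON emitted by util-linux `findmnt --json`.
type findmntOut struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		FSType  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	out, err := exec.Command("findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		fmt.Println("mount not present:", err) // findmnt exits non-zero when nothing matches
		return
	}
	var fm findmntOut
	if err := json.Unmarshal(out, &fm); err != nil {
		panic(err)
	}
	for _, fs := range fm.Filesystems {
		// For a minikube host mount one would expect a 9p entry here.
		fmt.Printf("%s on %s type %s (%s)\n", fs.Source, fs.Target, fs.FSType, fs.Options)
	}
}

Note that the two mount-start profiles in this group share one host directory but use distinct --mount-port values (46464 and 46465), so each VM talks to its own mount server on the host.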
TestMountStart/serial/StartWithMountSecond (24.55s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-593323 --memory=3072 --mount-string /tmp/TestMountStartserial91296022/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-593323 --memory=3072 --mount-string /tmp/TestMountStartserial91296022/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.553362404s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.55s)

TestMountStart/serial/VerifyMountSecond (0.34s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-593323 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-593323 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.34s)

TestMountStart/serial/DeleteFirst (0.79s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-566675 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.79s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-593323 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-593323 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.51s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-593323
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-593323: (1.510910676s)
--- PASS: TestMountStart/serial/Stop (1.51s)

TestMountStart/serial/RestartStopped (22.89s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-593323
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-593323: (21.887671402s)
--- PASS: TestMountStart/serial/RestartStopped (22.89s)

TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-593323 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-593323 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

TestMultiNode/serial/FreshStart2Nodes (111.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-570915 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-570915 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m51.291839071s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.69s)

TestMultiNode/serial/DeployApp2Nodes (5.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-570915 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-570915 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-570915 -- rollout status deployment/busybox: (3.954608646s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-570915 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-570915 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-570915 -- exec busybox-7b57f96db7-8hrrf -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-570915 -- exec busybox-7b57f96db7-jd5vj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-570915 -- exec busybox-7b57f96db7-8hrrf -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-570915 -- exec busybox-7b57f96db7-jd5vj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-570915 -- exec busybox-7b57f96db7-8hrrf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-570915 -- exec busybox-7b57f96db7-jd5vj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.94s)

TestMultiNode/serial/PingHostFrom2Pods (1.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-570915 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-570915 -- exec busybox-7b57f96db7-8hrrf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-570915 -- exec busybox-7b57f96db7-8hrrf -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-570915 -- exec busybox-7b57f96db7-jd5vj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-570915 -- exec busybox-7b57f96db7-jd5vj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.10s)
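The host address is recovered inside each pod by resolving host.minikube.internal and slicing the nslookup output with awk 'NR==5' | cut -d' ' -f3 (fifth line, third space-separated field), then pinging it; in this run that is 192.168.39.1, the host's address on the cluster's 192.168.39.0/24 network. A Go sketch of the same extraction; the sample nslookup output is an assumption, shaped only to illustrate the line and field positions:

package main

import (
	"fmt"
	"strings"
)

// hostIP replicates the shell pipeline above: take the fifth line of
// the nslookup output (awk 'NR==5') and its third space-separated
// field (cut -d' ' -f3).
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // NR==5
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // -f3
}

func main() {
	// Hypothetical busybox nslookup output, chosen so that line 5,
	// field 3 holds the resolved address.
	sample := "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10:53\n\nName:\thost.minikube.internal\nAddress: 1 192.168.39.1"
	fmt.Println(hostIP(sample)) // 192.168.39.1
}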
TestMultiNode/serial/AddNode (46.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-570915 -v=5 --alsologtostderr
E1109 14:31:07.146488  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-570915 -v=5 --alsologtostderr: (45.71129609s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.25s)

TestMultiNode/serial/MultiNodeLabels (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-570915 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

TestMultiNode/serial/ProfileList (0.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.52s)

TestMultiNode/serial/CopyFile (7.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 cp testdata/cp-test.txt multinode-570915:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 ssh -n multinode-570915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 cp multinode-570915:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3137396260/001/cp-test_multinode-570915.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 ssh -n multinode-570915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 cp multinode-570915:/home/docker/cp-test.txt multinode-570915-m02:/home/docker/cp-test_multinode-570915_multinode-570915-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 ssh -n multinode-570915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 ssh -n multinode-570915-m02 "sudo cat /home/docker/cp-test_multinode-570915_multinode-570915-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 cp multinode-570915:/home/docker/cp-test.txt multinode-570915-m03:/home/docker/cp-test_multinode-570915_multinode-570915-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 ssh -n multinode-570915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 ssh -n multinode-570915-m03 "sudo cat /home/docker/cp-test_multinode-570915_multinode-570915-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 cp testdata/cp-test.txt multinode-570915-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 ssh -n multinode-570915-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 cp multinode-570915-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3137396260/001/cp-test_multinode-570915-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 ssh -n multinode-570915-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 cp multinode-570915-m02:/home/docker/cp-test.txt multinode-570915:/home/docker/cp-test_multinode-570915-m02_multinode-570915.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 ssh -n multinode-570915-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 ssh -n multinode-570915 "sudo cat /home/docker/cp-test_multinode-570915-m02_multinode-570915.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 cp multinode-570915-m02:/home/docker/cp-test.txt multinode-570915-m03:/home/docker/cp-test_multinode-570915-m02_multinode-570915-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 ssh -n multinode-570915-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 ssh -n multinode-570915-m03 "sudo cat /home/docker/cp-test_multinode-570915-m02_multinode-570915-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 cp testdata/cp-test.txt multinode-570915-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 ssh -n multinode-570915-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 cp multinode-570915-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3137396260/001/cp-test_multinode-570915-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 ssh -n multinode-570915-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 cp multinode-570915-m03:/home/docker/cp-test.txt multinode-570915:/home/docker/cp-test_multinode-570915-m03_multinode-570915.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 ssh -n multinode-570915-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 ssh -n multinode-570915 "sudo cat /home/docker/cp-test_multinode-570915-m03_multinode-570915.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 cp multinode-570915-m03:/home/docker/cp-test.txt multinode-570915-m02:/home/docker/cp-test_multinode-570915-m03_multinode-570915-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 ssh -n multinode-570915-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 ssh -n multinode-570915-m02 "sudo cat /home/docker/cp-test_multinode-570915-m03_multinode-570915-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.02s)

TestMultiNode/serial/StopNode (2.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-570915 node stop m03: (1.903840841s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-570915 status: exit status 7 (380.945857ms)

                                                
                                                
-- stdout --
	multinode-570915
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-570915-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-570915-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-570915 status --alsologtostderr: exit status 7 (402.689044ms)

                                                
                                                
-- stdout --
	multinode-570915
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-570915-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-570915-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:31:19.808135  577127 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:31:19.808436  577127 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:31:19.808446  577127 out.go:374] Setting ErrFile to fd 2...
	I1109 14:31:19.808450  577127 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:31:19.808702  577127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 14:31:19.808904  577127 out.go:368] Setting JSON to false
	I1109 14:31:19.808946  577127 mustload.go:66] Loading cluster: multinode-570915
	I1109 14:31:19.809055  577127 notify.go:221] Checking for updates...
	I1109 14:31:19.809536  577127 config.go:182] Loaded profile config "multinode-570915": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:31:19.809562  577127 status.go:174] checking status of multinode-570915 ...
	I1109 14:31:19.812208  577127 status.go:371] multinode-570915 host status = "Running" (err=<nil>)
	I1109 14:31:19.812243  577127 host.go:66] Checking if "multinode-570915" exists ...
	I1109 14:31:19.815127  577127 main.go:143] libmachine: domain multinode-570915 has defined MAC address 52:54:00:1c:06:ea in network mk-multinode-570915
	I1109 14:31:19.815718  577127 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1c:06:ea", ip: ""} in network mk-multinode-570915: {Iface:virbr1 ExpiryTime:2025-11-09 15:28:43 +0000 UTC Type:0 Mac:52:54:00:1c:06:ea Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-570915 Clientid:01:52:54:00:1c:06:ea}
	I1109 14:31:19.815756  577127 main.go:143] libmachine: domain multinode-570915 has defined IP address 192.168.39.88 and MAC address 52:54:00:1c:06:ea in network mk-multinode-570915
	I1109 14:31:19.816021  577127 host.go:66] Checking if "multinode-570915" exists ...
	I1109 14:31:19.816383  577127 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:31:19.819160  577127 main.go:143] libmachine: domain multinode-570915 has defined MAC address 52:54:00:1c:06:ea in network mk-multinode-570915
	I1109 14:31:19.820089  577127 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1c:06:ea", ip: ""} in network mk-multinode-570915: {Iface:virbr1 ExpiryTime:2025-11-09 15:28:43 +0000 UTC Type:0 Mac:52:54:00:1c:06:ea Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-570915 Clientid:01:52:54:00:1c:06:ea}
	I1109 14:31:19.820207  577127 main.go:143] libmachine: domain multinode-570915 has defined IP address 192.168.39.88 and MAC address 52:54:00:1c:06:ea in network mk-multinode-570915
	I1109 14:31:19.820589  577127 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/multinode-570915/id_rsa Username:docker}
	I1109 14:31:19.917614  577127 ssh_runner.go:195] Run: systemctl --version
	I1109 14:31:19.928677  577127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:31:19.958387  577127 kubeconfig.go:125] found "multinode-570915" server: "https://192.168.39.88:8443"
	I1109 14:31:19.958434  577127 api_server.go:166] Checking apiserver status ...
	I1109 14:31:19.958482  577127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:31:19.984279  577127 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1403/cgroup
	W1109 14:31:20.002489  577127 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1403/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:31:20.002565  577127 ssh_runner.go:195] Run: ls
	I1109 14:31:20.010387  577127 api_server.go:253] Checking apiserver healthz at https://192.168.39.88:8443/healthz ...
	I1109 14:31:20.016161  577127 api_server.go:279] https://192.168.39.88:8443/healthz returned 200:
	ok
	I1109 14:31:20.016197  577127 status.go:463] multinode-570915 apiserver status = Running (err=<nil>)
	I1109 14:31:20.016210  577127 status.go:176] multinode-570915 status: &{Name:multinode-570915 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:31:20.016233  577127 status.go:174] checking status of multinode-570915-m02 ...
	I1109 14:31:20.018004  577127 status.go:371] multinode-570915-m02 host status = "Running" (err=<nil>)
	I1109 14:31:20.018033  577127 host.go:66] Checking if "multinode-570915-m02" exists ...
	I1109 14:31:20.021158  577127 main.go:143] libmachine: domain multinode-570915-m02 has defined MAC address 52:54:00:6f:bc:84 in network mk-multinode-570915
	I1109 14:31:20.022139  577127 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6f:bc:84", ip: ""} in network mk-multinode-570915: {Iface:virbr1 ExpiryTime:2025-11-09 15:29:46 +0000 UTC Type:0 Mac:52:54:00:6f:bc:84 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-570915-m02 Clientid:01:52:54:00:6f:bc:84}
	I1109 14:31:20.022184  577127 main.go:143] libmachine: domain multinode-570915-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:6f:bc:84 in network mk-multinode-570915
	I1109 14:31:20.022581  577127 host.go:66] Checking if "multinode-570915-m02" exists ...
	I1109 14:31:20.022987  577127 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:31:20.026900  577127 main.go:143] libmachine: domain multinode-570915-m02 has defined MAC address 52:54:00:6f:bc:84 in network mk-multinode-570915
	I1109 14:31:20.027519  577127 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6f:bc:84", ip: ""} in network mk-multinode-570915: {Iface:virbr1 ExpiryTime:2025-11-09 15:29:46 +0000 UTC Type:0 Mac:52:54:00:6f:bc:84 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:multinode-570915-m02 Clientid:01:52:54:00:6f:bc:84}
	I1109 14:31:20.027554  577127 main.go:143] libmachine: domain multinode-570915-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:6f:bc:84 in network mk-multinode-570915
	I1109 14:31:20.027779  577127 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21139-549598/.minikube/machines/multinode-570915-m02/id_rsa Username:docker}
	I1109 14:31:20.112761  577127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:31:20.132664  577127 status.go:176] multinode-570915-m02 status: &{Name:multinode-570915-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:31:20.132717  577127 status.go:174] checking status of multinode-570915-m03 ...
	I1109 14:31:20.134640  577127 status.go:371] multinode-570915-m03 host status = "Stopped" (err=<nil>)
	I1109 14:31:20.134667  577127 status.go:384] host is not running, skipping remaining checks
	I1109 14:31:20.134674  577127 status.go:176] multinode-570915-m03 status: &{Name:multinode-570915-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.69s)
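Worth noting: minikube status encodes cluster state in its exit code, so with m03 stopped both status invocations above exit with status 7 while still printing the per-node breakdown, and the test passes anyway. A Go sketch of treating that exit code as data rather than failure; the binary path and profile name match this run, the handling itself is illustrative:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-570915", "status").CombinedOutput()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &ee) && ee.ExitCode() == 7:
		// Exit 7 accompanied a stopped host in this report.
		fmt.Printf("some hosts stopped:\n%s", out)
	default:
		fmt.Println("status failed:", err)
	}
}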
TestMultiNode/serial/StartAfterStop (45.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 node start m03 -v=5 --alsologtostderr
E1109 14:31:27.378481  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-570915 node start m03 -v=5 --alsologtostderr: (44.907672714s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (45.51s)

TestMultiNode/serial/RestartKeepsNodes (318.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-570915
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-570915
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-570915: (2m59.336080853s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-570915 --wait=true -v=5 --alsologtostderr
E1109 14:36:07.146099  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:36:10.454172  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:36:27.379187  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-570915 --wait=true -v=5 --alsologtostderr: (2m19.193916419s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-570915
--- PASS: TestMultiNode/serial/RestartKeepsNodes (318.68s)

TestMultiNode/serial/DeleteNode (2.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-570915 node delete m03: (2.400781369s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.93s)

TestMultiNode/serial/StopMultiNode (152.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 stop
E1109 14:39:10.218279  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-570915 stop: (2m32.612401594s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-570915 status: exit status 7 (74.230995ms)

                                                
                                                
-- stdout --
	multinode-570915
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-570915-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-570915 status --alsologtostderr: exit status 7 (70.66713ms)

                                                
                                                
-- stdout --
	multinode-570915
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-570915-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:40:00.002253  579536 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:40:00.002539  579536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:40:00.002553  579536 out.go:374] Setting ErrFile to fd 2...
	I1109 14:40:00.002558  579536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:40:00.002780  579536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 14:40:00.003031  579536 out.go:368] Setting JSON to false
	I1109 14:40:00.003078  579536 mustload.go:66] Loading cluster: multinode-570915
	I1109 14:40:00.003204  579536 notify.go:221] Checking for updates...
	I1109 14:40:00.003618  579536 config.go:182] Loaded profile config "multinode-570915": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:40:00.003644  579536 status.go:174] checking status of multinode-570915 ...
	I1109 14:40:00.005943  579536 status.go:371] multinode-570915 host status = "Stopped" (err=<nil>)
	I1109 14:40:00.005969  579536 status.go:384] host is not running, skipping remaining checks
	I1109 14:40:00.005978  579536 status.go:176] multinode-570915 status: &{Name:multinode-570915 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:40:00.006036  579536 status.go:174] checking status of multinode-570915-m02 ...
	I1109 14:40:00.007471  579536 status.go:371] multinode-570915-m02 host status = "Stopped" (err=<nil>)
	I1109 14:40:00.007490  579536 status.go:384] host is not running, skipping remaining checks
	I1109 14:40:00.007495  579536 status.go:176] multinode-570915-m02 status: &{Name:multinode-570915-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (152.76s)

TestMultiNode/serial/RestartMultiNode (94.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-570915 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1109 14:41:07.145978  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:41:27.379287  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-570915 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m33.852984698s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-570915 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (94.43s)

TestMultiNode/serial/ValidateNameConflict (45.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-570915
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-570915-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-570915-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (90.590065ms)

                                                
                                                
-- stdout --
	* [multinode-570915-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-570915-m02' is duplicated with machine name 'multinode-570915-m02' in profile 'multinode-570915'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-570915-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-570915-m03 --driver=kvm2  --container-runtime=crio: (43.923655775s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-570915
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-570915: exit status 80 (252.723829ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-570915 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-570915-m03 already exists in multinode-570915-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-570915-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.28s)
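By this point the report has exercised several distinct minikube exit codes: 7 (status with a stopped host), 14 (MK_USAGE, the duplicated profile name above), 56 (DRV_UNSUPPORTED_OS, in TestErrorJSONOutput), 80 (GUEST_NODE_ADD, the node-add conflict above) and 106 (K8S_DOWNGRADE_UNSUPPORTED, in TestKubernetesUpgrade below). A small Go sketch collecting them; the descriptions paraphrase this report's log messages and are not an authoritative minikube exit-code table:

package main

import "fmt"

// Exit codes observed in this report, keyed to the contexts they
// appeared in above and below.
var observedExitCodes = map[int]string{
	7:   "status: at least one host stopped",
	14:  "MK_USAGE: duplicated profile name rejected",
	56:  "DRV_UNSUPPORTED_OS: driver unavailable on this platform",
	80:  "GUEST_NODE_ADD: node already exists in another profile",
	106: "K8S_DOWNGRADE_UNSUPPORTED: refusing to downgrade an existing cluster",
}

func main() {
	for code, desc := range observedExitCodes {
		fmt.Printf("exit %d: %s\n", code, desc)
	}
}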
TestScheduledStopUnix (116.31s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-313208 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-313208 --memory=3072 --driver=kvm2  --container-runtime=crio: (44.355786531s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-313208 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-313208 -n scheduled-stop-313208
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-313208 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1109 14:45:48.420565  553473 retry.go:31] will retry after 53.756µs: open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/scheduled-stop-313208/pid: no such file or directory
I1109 14:45:48.421823  553473 retry.go:31] will retry after 145.044µs: open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/scheduled-stop-313208/pid: no such file or directory
I1109 14:45:48.423029  553473 retry.go:31] will retry after 223.757µs: open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/scheduled-stop-313208/pid: no such file or directory
I1109 14:45:48.424225  553473 retry.go:31] will retry after 450.641µs: open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/scheduled-stop-313208/pid: no such file or directory
I1109 14:45:48.425392  553473 retry.go:31] will retry after 685.266µs: open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/scheduled-stop-313208/pid: no such file or directory
I1109 14:45:48.426591  553473 retry.go:31] will retry after 716.085µs: open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/scheduled-stop-313208/pid: no such file or directory
I1109 14:45:48.427770  553473 retry.go:31] will retry after 1.582175ms: open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/scheduled-stop-313208/pid: no such file or directory
I1109 14:45:48.430094  553473 retry.go:31] will retry after 2.419654ms: open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/scheduled-stop-313208/pid: no such file or directory
I1109 14:45:48.433445  553473 retry.go:31] will retry after 3.151222ms: open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/scheduled-stop-313208/pid: no such file or directory
I1109 14:45:48.437809  553473 retry.go:31] will retry after 5.353855ms: open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/scheduled-stop-313208/pid: no such file or directory
I1109 14:45:48.444250  553473 retry.go:31] will retry after 7.860814ms: open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/scheduled-stop-313208/pid: no such file or directory
I1109 14:45:48.452609  553473 retry.go:31] will retry after 11.557823ms: open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/scheduled-stop-313208/pid: no such file or directory
I1109 14:45:48.464953  553473 retry.go:31] will retry after 9.445052ms: open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/scheduled-stop-313208/pid: no such file or directory
I1109 14:45:48.475362  553473 retry.go:31] will retry after 28.716537ms: open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/scheduled-stop-313208/pid: no such file or directory
I1109 14:45:48.504775  553473 retry.go:31] will retry after 27.987362ms: open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/scheduled-stop-313208/pid: no such file or directory
I1109 14:45:48.533344  553473 retry.go:31] will retry after 54.927529ms: open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/scheduled-stop-313208/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-313208 --cancel-scheduled
E1109 14:46:07.154084  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-313208 -n scheduled-stop-313208
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-313208
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-313208 --schedule 15s
E1109 14:46:27.379548  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-313208
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-313208: exit status 7 (78.290071ms)

                                                
                                                
-- stdout --
	scheduled-stop-313208
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-313208 -n scheduled-stop-313208
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-313208 -n scheduled-stop-313208: exit status 7 (75.245489ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-313208" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-313208
--- PASS: TestScheduledStopUnix (116.31s)
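The retry.go lines above poll for the scheduled-stop pid file at intervals that roughly double, from about 53µs up to about 55ms, with enough jitter that the sequence is not strictly monotonic (9.4ms follows 11.5ms). A Go sketch of that capped, jittered exponential backoff; the constants and file name are illustrative, not minikube's actual tuning:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile polls until path exists, sleeping with jittered
// exponential backoff, similar in spirit to the retry.go intervals
// logged above.
func waitForFile(path string, maxWait time.Duration) error {
	delay := 50 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		// Jitter between 0.5x and 1.5x the nominal delay, which is why
		// logged intervals can occasionally shrink between attempts.
		time.Sleep(time.Duration(float64(delay) * (0.5 + rand.Float64())))
		if delay *= 2; delay > 100*time.Millisecond {
			delay = 100 * time.Millisecond // cap the growth
		}
	}
	return errors.New("timed out waiting for " + path)
}

func main() {
	fmt.Println(waitForFile("/tmp/hypothetical-pid-file", 2*time.Second))
}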
TestRunningBinaryUpgrade (175.19s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2287469516 start -p running-upgrade-353436 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2287469516 start -p running-upgrade-353436 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m33.428934538s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-353436 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-353436 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m19.193069171s)
helpers_test.go:175: Cleaning up "running-upgrade-353436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-353436
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-353436: (2.123574767s)
--- PASS: TestRunningBinaryUpgrade (175.19s)

TestKubernetesUpgrade (213.76s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-699004 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-699004 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.419839037s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-699004
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-699004: (2.621064952s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-699004 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-699004 status --format={{.Host}}: exit status 7 (91.92146ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-699004 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-699004 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m7.742040046s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-699004 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-699004 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-699004 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (122.945568ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-699004] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-699004
	    minikube start -p kubernetes-upgrade-699004 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6990042 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-699004 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-699004 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1109 14:51:07.145067  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-699004 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.546701131s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-699004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-699004
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-699004: (1.099617645s)
--- PASS: TestKubernetesUpgrade (213.76s)
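
The downgrade leg above passes precisely because minikube refuses the version change. A hedged Go sketch of that negative assertion, checking for the exit status 106 and the K8S_DOWNGRADE_UNSUPPORTED reason seen in the log; the test and profile names here are illustrative, not the actual code in version_upgrade_test.go.

package upgrade_sketch

import (
    "os/exec"
    "strings"
    "testing"
)

func TestDowngradeIsRejectedSketch(t *testing.T) {
    out, err := exec.Command("out/minikube-linux-amd64", "start",
        "-p", "kubernetes-upgrade-sketch", "--kubernetes-version=v1.28.0",
        "--driver=kvm2", "--container-runtime=crio").CombinedOutput()
    if err == nil {
        t.Fatal("expected the downgrade to fail, but start succeeded")
    }
    // The log above shows exit status 106 for an unsupported downgrade.
    if ee, ok := err.(*exec.ExitError); !ok || ee.ExitCode() != 106 {
        t.Fatalf("expected exit status 106, got %v", err)
    }
    if !strings.Contains(string(out), "K8S_DOWNGRADE_UNSUPPORTED") {
        t.Fatalf("expected K8S_DOWNGRADE_UNSUPPORTED in output, got:\n%s", out)
    }
}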

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-748314 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-748314 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (127.465849ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-748314] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (95.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-748314 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-748314 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m35.360920489s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-748314 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (95.66s)

                                                
                                    
TestNetworkPlugins/group/false (4.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-877855 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-877855 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (163.135326ms)

                                                
                                                
-- stdout --
	* [false-877855] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:47:04.216463  583254 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:47:04.216924  583254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:47:04.216953  583254 out.go:374] Setting ErrFile to fd 2...
	I1109 14:47:04.216964  583254 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:47:04.217402  583254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-549598/.minikube/bin
	I1109 14:47:04.218370  583254 out.go:368] Setting JSON to false
	I1109 14:47:04.220130  583254 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":73773,"bootTime":1762625851,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 14:47:04.220331  583254 start.go:143] virtualization: kvm guest
	I1109 14:47:04.222814  583254 out.go:179] * [false-877855] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1109 14:47:04.224531  583254 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:47:04.224603  583254 notify.go:221] Checking for updates...
	I1109 14:47:04.227932  583254 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:47:04.229727  583254 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-549598/kubeconfig
	I1109 14:47:04.231396  583254 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-549598/.minikube
	I1109 14:47:04.233192  583254 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 14:47:04.234823  583254 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:47:04.237291  583254 config.go:182] Loaded profile config "NoKubernetes-748314": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:47:04.237496  583254 config.go:182] Loaded profile config "force-systemd-env-849257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:47:04.237706  583254 config.go:182] Loaded profile config "offline-crio-668437": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:47:04.237932  583254 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:47:04.285486  583254 out.go:179] * Using the kvm2 driver based on user configuration
	I1109 14:47:04.286714  583254 start.go:309] selected driver: kvm2
	I1109 14:47:04.286738  583254 start.go:930] validating driver "kvm2" against <nil>
	I1109 14:47:04.286752  583254 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:47:04.289062  583254 out.go:203] 
	W1109 14:47:04.290608  583254 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1109 14:47:04.292095  583254 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-877855 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-877855

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-877855

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-877855

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-877855

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-877855

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-877855

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-877855

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-877855

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-877855

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-877855

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: false-877855

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-877855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-877855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-877855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-877855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-877855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-877855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-877855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-877855" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-877855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-877855" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-877855" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-877855

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-877855"

                                                
                                                
----------------------- debugLogs end: false-877855 [took: 4.09776966s] --------------------------------
helpers_test.go:175: Cleaning up "false-877855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-877855
--- PASS: TestNetworkPlugins/group/false (4.47s)
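
This group passes because start correctly refuses --cni=false under the crio runtime with exit status 14 before ever creating a VM, which is also why the debugLogs dump above finds no profile. A sketch of that negative check; the profile name is hypothetical and the quoted error string is taken from the stderr above.

package net_sketch

import (
    "os/exec"
    "strings"
    "testing"
)

func TestCrioRequiresCNISketch(t *testing.T) {
    out, err := exec.Command("out/minikube-linux-amd64", "start",
        "-p", "false-sketch", "--cni=false",
        "--driver=kvm2", "--container-runtime=crio").CombinedOutput()
    if err == nil {
        t.Fatal("expected start with --cni=false to fail under crio")
    }
    if !strings.Contains(string(out), `The "crio" container runtime requires CNI`) {
        t.Fatalf("missing CNI requirement error, got:\n%s", out)
    }
}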

                                                
                                    
TestISOImage/Setup (84.54s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-746433 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-746433 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m24.540274248s)
--- PASS: TestISOImage/Setup (84.54s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (35.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-748314 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-748314 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (33.847453154s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-748314 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-748314 status -o json: exit status 2 (255.748148ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-748314","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-748314
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-748314: (1.15624936s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (35.26s)

                                                
                                    
TestISOImage/Binaries/crictl (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.21s)

                                                
                                    
TestISOImage/Binaries/curl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.19s)

                                                
                                    
TestISOImage/Binaries/docker (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.21s)

                                                
                                    
TestISOImage/Binaries/git (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.20s)

                                                
                                    
TestISOImage/Binaries/iptables (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.21s)

                                                
                                    
TestISOImage/Binaries/podman (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.20s)

                                                
                                    
TestISOImage/Binaries/rsync (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.21s)

                                                
                                    
TestISOImage/Binaries/socat (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.21s)

                                                
                                    
TestISOImage/Binaries/wget (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.20s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.21s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-746433 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.22s)
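
Each TestISOImage/Binaries subtest above is the same one-liner: SSH into the guest and confirm the binary is on PATH via "which". A table-driven Go sketch of that loop, reusing the guest-746433 profile and binary list from the log; the loop structure is an assumption, not iso_test.go itself.

package iso_sketch

import (
    "os/exec"
    "testing"
)

func TestGuestBinariesSketch(t *testing.T) {
    binaries := []string{
        "crictl", "curl", "docker", "git", "iptables",
        "podman", "rsync", "socat", "wget", "VBoxControl", "VBoxService",
    }
    for _, bin := range binaries {
        t.Run(bin, func(t *testing.T) {
            // "which" exits non-zero when the binary is not on PATH.
            out, err := exec.Command("out/minikube-linux-amd64",
                "-p", "guest-746433", "ssh", "which "+bin).CombinedOutput()
            if err != nil {
                t.Fatalf("%s not found in guest: %v\n%s", bin, err, out)
            }
        })
    }
}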

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.47s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (146.88s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.771286546 start -p stopped-upgrade-667086 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.771286546 start -p stopped-upgrade-667086 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m21.992144095s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.771286546 -p stopped-upgrade-667086 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.771286546 -p stopped-upgrade-667086 stop: (2.074164401s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-667086 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-667086 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.816359871s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (146.88s)

                                                
                                    
TestNoKubernetes/serial/Start (66.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-748314 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-748314 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m6.517124423s)
--- PASS: TestNoKubernetes/serial/Start (66.52s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21139-549598/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
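
VerifyNok8sNoK8sDownloads inspects the cache directory named above for the placeholder version v0.0.0. A sketch under the assumption that the check simply requires that directory to be absent or empty; the path is copied from the log.

package nok8s_sketch

import (
    "os"
    "testing"
)

func TestNoK8sDownloadsSketch(t *testing.T) {
    const cacheDir = "/home/jenkins/minikube-integration/21139-549598/.minikube/cache/linux/amd64/v0.0.0"
    entries, err := os.ReadDir(cacheDir)
    if err != nil && !os.IsNotExist(err) {
        t.Fatalf("reading cache dir: %v", err)
    }
    // A missing directory reads as zero entries, which is the passing case.
    if len(entries) > 0 {
        t.Fatalf("expected no cached Kubernetes binaries, found %d entries", len(entries))
    }
}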

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-748314 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-748314 "sudo systemctl is-active --quiet service kubelet": exit status 1 (198.502378ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
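
VerifyK8sNotRunning inverts the usual assertion: the ssh command is expected to exit non-zero, because an active kubelet would mean Kubernetes started despite --no-kubernetes. A sketch of that inverted check; the profile name comes from the log and the test name is illustrative.

package nok8s_sketch

import (
    "os/exec"
    "testing"
)

func TestKubeletInactiveSketch(t *testing.T) {
    cmd := exec.Command("out/minikube-linux-amd64", "ssh",
        "-p", "NoKubernetes-748314",
        "sudo systemctl is-active --quiet service kubelet")
    // systemctl is-active exits non-zero when the unit is not active,
    // so a successful run here would be the failure case.
    if err := cmd.Run(); err == nil {
        t.Fatal("kubelet is active, but the profile was started with --no-kubernetes")
    }
}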

                                                
                                    
TestNoKubernetes/serial/ProfileList (7.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (4.899331508s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (2.114244885s)
--- PASS: TestNoKubernetes/serial/ProfileList (7.01s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-748314
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-748314: (1.625775793s)
--- PASS: TestNoKubernetes/serial/Stop (1.63s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (57.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-748314 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-748314 --driver=kvm2  --container-runtime=crio: (57.964538464s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (57.96s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-667086
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-667086: (1.336424069s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.34s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-748314 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-748314 "sudo systemctl is-active --quiet service kubelet": exit status 1 (218.999468ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
TestPause/serial/Start (116.86s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-750355 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-750355 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m56.855426361s)
--- PASS: TestPause/serial/Start (116.86s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (96.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-877855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-877855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m36.807752585s)
--- PASS: TestNetworkPlugins/group/auto/Start (96.81s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (71.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-877855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-877855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m11.581072709s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.58s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (81.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-877855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-877855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m21.750744206s)
--- PASS: TestNetworkPlugins/group/calico/Start (81.75s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-877855 "pgrep -a kubelet"
I1109 14:54:41.266384  553473 config.go:182] Loaded profile config "auto-877855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-877855 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9khpw" [86cdb197-90a4-4f64-bb56-336da9e4a4db] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9khpw" [86cdb197-90a4-4f64-bb56-336da9e4a4db] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.008112894s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-n9dwd" [2b4c0803-96ff-4bcd-9253-11dfdeac402c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.020189274s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)
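
The ControllerPod checks poll for a Running pod matching the CNI's label within a timeout. The same condition can be expressed with kubectl's built-in wait; the sketch below is equivalent in spirit to the kindnet check above, with kubectl wait standing in for the harness's own polling (context and label taken from the log).

package net_sketch

import (
    "os/exec"
    "testing"
)

func TestKindnetControllerReadySketch(t *testing.T) {
    out, err := exec.Command("kubectl", "--context", "kindnet-877855",
        "wait", "--namespace=kube-system",
        "--for=condition=ready", "pod",
        "--selector=app=kindnet", "--timeout=120s").CombinedOutput()
    if err != nil {
        t.Fatalf("kindnet pod never became ready: %v\n%s", err, out)
    }
}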

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-877855 "pgrep -a kubelet"
I1109 14:54:48.558460  553473 config.go:182] Loaded profile config "kindnet-877855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-877855 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context kindnet-877855 replace --force -f testdata/netcat-deployment.yaml: (1.223329613s)
I1109 14:54:50.059844  553473 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xpdxt" [14e233b2-76c7-487f-86a9-aa612ab7d751] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xpdxt" [14e233b2-76c7-487f-86a9-aa612ab7d751] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.007286155s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.54s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-877855 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-877855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-877855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)
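
The DNS, Localhost, and HairPin checks are all kubectl exec probes against the netcat deployment: an nslookup for in-cluster DNS, an nc to localhost for same-pod traffic, and an nc to the service name for hairpin traffic back through the service VIP. A combined sketch using the auto-877855 context; the probe commands mirror the log and the test wrapper is illustrative.

package net_sketch

import (
    "os/exec"
    "testing"
)

func TestNetcatProbesSketch(t *testing.T) {
    probes := map[string][]string{
        "dns":       {"nslookup", "kubernetes.default"},
        "localhost": {"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
        "hairpin":   {"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
    }
    for name, probe := range probes {
        t.Run(name, func(t *testing.T) {
            args := append([]string{"--context", "auto-877855",
                "exec", "deployment/netcat", "--"}, probe...)
            if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
                t.Fatalf("%s probe failed: %v\n%s", name, err, out)
            }
        })
    }
}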

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-877855 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-877855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-877855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

TestNetworkPlugins/group/custom-flannel/Start (78.62s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-877855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-877855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m18.623238403s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (78.62s)
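
Note: here --cni is given a path to a manifest (testdata/kube-flannel.yaml) rather than a built-in plugin name, so minikube applies that YAML as the CNI. Assuming the manifest follows the upstream flannel layout (kube-flannel namespace, as also seen in the flannel/ControllerPod test below), the rollout can be checked with:

    kubectl --context custom-flannel-877855 -n kube-flannel get daemonset,pods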

TestNetworkPlugins/group/enable-default-cni/Start (115.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-877855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-877855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m55.119873754s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (115.12s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-dz6pk" [82ecdd3e-fa73-408f-a91c-dd7eb015abea] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-dz6pk" [82ecdd3e-fa73-408f-a91c-dd7eb015abea] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005978825s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-877855 "pgrep -a kubelet"
I1109 14:55:47.378382  553473 config.go:182] Loaded profile config "calico-877855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)
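
Note: KubeletFlags only verifies the kubelet command line: pgrep -a prints the PID plus the full argv of the matching process. To pick out a single flag, the output can be split locally; the container-runtime flag name here is an assumption about this kubelet build:

    out/minikube-linux-amd64 ssh -p calico-877855 "pgrep -a kubelet" | tr ' ' '\n' | grep container-runtime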

TestNetworkPlugins/group/calico/NetCatPod (23.45s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-877855 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cws55" [582899cb-8a54-4465-a27b-2c618ad280ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1109 14:55:50.220092  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-cws55" [582899cb-8a54-4465-a27b-2c618ad280ae] Running
E1109 14:56:07.145545  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 23.008308659s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (23.45s)

TestNetworkPlugins/group/calico/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-877855 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-877855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-877855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.29s)

TestNetworkPlugins/group/flannel/Start (84.32s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-877855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-877855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m24.323361637s)
--- PASS: TestNetworkPlugins/group/flannel/Start (84.32s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-877855 "pgrep -a kubelet"
I1109 14:56:27.253353  553473 config.go:182] Loaded profile config "custom-flannel-877855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.48s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-877855 replace --force -f testdata/netcat-deployment.yaml
E1109 14:56:27.379011  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4sddl" [94bf042c-0255-40b8-95a4-2648575d68cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4sddl" [94bf042c-0255-40b8-95a4-2648575d68cf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.068056439s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.48s)

TestNetworkPlugins/group/bridge/Start (86.88s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-877855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-877855 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m26.877691845s)
--- PASS: TestNetworkPlugins/group/bridge/Start (86.88s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-877855 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-877855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-877855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestStartStop/group/old-k8s-version/serial/FirstStart (116.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-562561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-562561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m56.425029059s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (116.43s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-877855 "pgrep -a kubelet"
I1109 14:57:13.388889  553473 config.go:182] Loaded profile config "enable-default-cni-877855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-877855 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-s77lc" [970f029d-8821-42e5-818e-b99305a7c60f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-s77lc" [970f029d-8821-42e5-818e-b99305a7c60f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.005542417s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.37s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-877855 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-877855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-877855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-ksxll" [e0566ba0-ae8a-40e0-af75-f3092a182156] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004849379s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
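
Note: ControllerPod waits for the CNI's own workload rather than a test pod; for flannel that is the kube-flannel-ds DaemonSet pod seen above. A manual check against the same label:

    kubectl --context flannel-877855 -n kube-flannel get pods -l app=flannel -o wide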

TestStartStop/group/no-preload/serial/FirstStart (116.22s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-874801 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-874801 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m56.218833941s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (116.22s)
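
Note: --preload=false skips minikube's preloaded image tarball, so every Kubernetes image is pulled individually through crio, which is why this start (1m56s) takes about as long as the old-k8s-version FirstStart above. Assuming crictl is available in the guest (it normally is in the minikube VM), the pulled images can be listed with:

    out/minikube-linux-amd64 ssh -p no-preload-874801 "sudo crictl images"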

TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-877855 "pgrep -a kubelet"
I1109 14:57:51.373225  553473 config.go:182] Loaded profile config "flannel-877855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/flannel/NetCatPod (13.29s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-877855 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g2whb" [3a59c211-7b1d-4d6c-b77a-f53f5eeac047] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-g2whb" [3a59c211-7b1d-4d6c-b77a-f53f5eeac047] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.007752833s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.29s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-877855 "pgrep -a kubelet"
I1109 14:57:57.312633  553473 config.go:182] Loaded profile config "bridge-877855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

TestNetworkPlugins/group/bridge/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-877855 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f6wxt" [a83d3f27-14a7-4a3d-b760-d9d7d8357dd5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-f6wxt" [a83d3f27-14a7-4a3d-b760-d9d7d8357dd5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.008287447s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.38s)

TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-877855 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-877855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-877855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-877855 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-877855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-877855 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

TestStartStop/group/embed-certs/serial/FirstStart (66.36s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-451846 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-451846 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m6.363306074s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (66.36s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (118.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-711258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-711258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m58.027905432s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (118.03s)
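
Note: --apiserver-port=8444 moves the API server off minikube's default 8443; the rest of the suite talks to it transparently because the kubeconfig entry records the full server URL. One way to read back the port that got recorded (minikube names the kubeconfig cluster after the profile):

    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-711258")].cluster.server}'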

TestStartStop/group/old-k8s-version/serial/DeployApp (11.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-562561 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [29cb5147-f0fa-47b1-8113-459a074f1b92] Pending
helpers_test.go:352: "busybox" [29cb5147-f0fa-47b1-8113-459a074f1b92] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [29cb5147-f0fa-47b1-8113-459a074f1b92] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.006340834s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-562561 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.47s)
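
Note: DeployApp creates a single busybox pod from testdata/busybox.yaml, waits for it, then execs "ulimit -n" in it, which both proves kubectl exec works end to end and reports the container's open-file limit. A manual equivalent, same context assumed:

    kubectl --context old-k8s-version-562561 wait --for=condition=Ready pod/busybox --timeout=8m
    kubectl --context old-k8s-version-562561 exec busybox -- /bin/sh -c "ulimit -n"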

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-562561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-562561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.388080647s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-562561 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.49s)
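
Note: --images and --registries rewrite the addon's image reference, here pointing metrics-server at fake.domain, a deliberately unresolvable registry, so the image can never actually be pulled. The substituted reference can be read back with:

    kubectl --context old-k8s-version-562561 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'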

TestStartStop/group/old-k8s-version/serial/Stop (86.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-562561 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-562561 --alsologtostderr -v=3: (1m26.871179458s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (86.87s)

TestStartStop/group/embed-certs/serial/DeployApp (11.4s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-451846 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [959b3e32-6ecf-46f4-afc7-663c9d02406a] Pending
helpers_test.go:352: "busybox" [959b3e32-6ecf-46f4-afc7-663c9d02406a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [959b3e32-6ecf-46f4-afc7-663c9d02406a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.006540025s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-451846 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.46s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-451846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1109 14:59:41.533112  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:59:41.539727  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:59:41.551395  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:59:41.573040  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:59:41.614873  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:59:41.696954  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-451846 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.292740933s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-451846 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.46s)
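
Note: the E-level cert_rotation.go:172 lines here and below appear to come from client-go's certificate-reload watcher inside the long-running test binary; they reference client.crt files of profiles (auto-877855, kindnet-877855, ...) whose clusters were already torn down, so they are background noise rather than failures of the test being run. When reading a saved copy of this report, they can be filtered out (log filename is illustrative):

    grep -v 'cert_rotation.go' KVM_Linux_crio-21139.log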

TestStartStop/group/no-preload/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-874801 create -f testdata/busybox.yaml
E1109 14:59:41.858815  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4cfda939-ef81-4b9d-8bb4-614e048d466d] Pending
E1109 14:59:42.181084  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:59:42.245582  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:59:42.252216  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:59:42.263750  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:59:42.285346  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:59:42.327027  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:59:42.408844  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:59:42.571038  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [4cfda939-ef81-4b9d-8bb4-614e048d466d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1109 14:59:43.534952  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:59:44.104422  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:59:44.816868  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [4cfda939-ef81-4b9d-8bb4-614e048d466d] Running
E1109 14:59:46.666706  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:59:47.379024  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004186071s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-874801 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.38s)

TestStartStop/group/embed-certs/serial/Stop (86.62s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-451846 --alsologtostderr -v=3
E1109 14:59:42.822999  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:59:42.893224  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-451846 --alsologtostderr -v=3: (1m26.622486624s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (86.62s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-874801 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1109 14:59:51.788430  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-874801 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.059772685s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-874801 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/no-preload/serial/Stop (89.28s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-874801 --alsologtostderr -v=3
E1109 14:59:52.500908  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:00:02.030011  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:00:02.743075  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:00:22.512504  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:00:23.225170  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-874801 --alsologtostderr -v=3: (1m29.278439094s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (89.28s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-711258 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4e0dbaba-45e1-43e6-b5b3-609c0292670b] Pending
helpers_test.go:352: "busybox" [4e0dbaba-45e1-43e6-b5b3-609c0292670b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4e0dbaba-45e1-43e6-b5b3-609c0292670b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004762826s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-711258 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.43s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-562561 -n old-k8s-version-562561
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-562561 -n old-k8s-version-562561: exit status 7 (72.452943ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-562561 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
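
Note: "exit status 7 (may be ok)" is expected here: minikube status signals stopped components through a non-zero exit code (7 in this run), while stdout still prints the host state, so the test treats the non-zero exit as acceptable and proceeds to enable the addon on the stopped cluster. Reproducing the check by hand:

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-562561 -n old-k8s-version-562561
    echo $?    # 7 while the VM is stopped, per the run above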

TestStartStop/group/old-k8s-version/serial/SecondStart (50.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-562561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-562561 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (50.170927937s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-562561 -n old-k8s-version-562561
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.55s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-711258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-711258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.162229171s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-711258 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.26s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (90.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-711258 --alsologtostderr -v=3
E1109 15:00:41.106744  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/calico-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:00:41.113303  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/calico-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:00:41.124897  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/calico-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:00:41.146487  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/calico-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:00:41.187989  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/calico-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:00:41.270247  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/calico-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:00:41.431919  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/calico-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:00:41.754031  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/calico-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:00:42.395977  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/calico-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:00:43.677496  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/calico-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:00:46.239676  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/calico-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:00:51.361676  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/calico-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:01:01.604037  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/calico-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:01:03.474337  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:01:04.187545  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:01:07.145242  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/functional-419649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-711258 --alsologtostderr -v=3: (1m30.769869581s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (90.77s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-451846 -n embed-certs-451846
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-451846 -n embed-certs-451846: exit status 7 (81.316736ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-451846 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (53.72s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-451846 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-451846 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (53.391309502s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-451846 -n embed-certs-451846
E1109 15:02:03.048489  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/calico-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.72s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-874801 -n no-preload-874801
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-874801 -n no-preload-874801: exit status 7 (101.781628ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-874801 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/no-preload/serial/SecondStart (76.07s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-874801 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1109 15:01:22.086061  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/calico-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-874801 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m15.752295564s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-874801 -n no-preload-874801
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (76.07s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5vngw" [5763e02a-a37d-487f-8554-5f2c759e4ad0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1109 15:01:27.378994  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/addons-640912/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:01:27.628266  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/custom-flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:01:27.634871  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/custom-flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:01:27.646476  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/custom-flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:01:27.668618  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/custom-flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:01:27.710229  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/custom-flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:01:27.792312  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/custom-flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:01:27.954724  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/custom-flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:01:28.276961  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/custom-flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:01:28.920291  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/custom-flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:01:30.202101  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/custom-flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:01:32.763862  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/custom-flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5vngw" [5763e02a-a37d-487f-8554-5f2c759e4ad0] Running
E1109 15:01:37.886532  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/custom-flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.009688531s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.01s)
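The wait above is driven by a Go helper that polls the pod list; a rough shell equivalent (hedged: kubectl wait checks the Ready condition rather than the helper's Pending-then-Running transition), using the context name from the log and the test's 9m budget:

kubectl --context old-k8s-version-562561 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m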

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5vngw" [5763e02a-a37d-487f-8554-5f2c759e4ad0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004896609s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-562561 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-562561 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
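To audit the same image set manually (profile name from the log; the grep filter below only illustrates what gets flagged as non-minikube, it is not the test's actual matcher):

# The JSON form is what the test parses; the plain listing is easier to eyeball.
out/minikube-linux-amd64 -p old-k8s-version-562561 image list --format=json
out/minikube-linux-amd64 -p old-k8s-version-562561 image list | grep -v '^registry.k8s.io/' || true
# e.g. kindest/kindnetd and gcr.io/k8s-minikube/busybox, as reported above.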

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-562561 --alsologtostderr -v=1
E1109 15:01:48.128388  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/custom-flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-562561 --alsologtostderr -v=1: (1.272087034s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-562561 -n old-k8s-version-562561
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-562561 -n old-k8s-version-562561: exit status 2 (303.040131ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-562561 -n old-k8s-version-562561
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-562561 -n old-k8s-version-562561: exit status 2 (304.557274ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-562561 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-562561 -n old-k8s-version-562561
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-562561 -n old-k8s-version-562561
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.72s)

TestStartStop/group/newest-cni/serial/FirstStart (59.41s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-188575 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-188575 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (59.405961087s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.41s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jqpcc" [a5b363dc-9f65-44b7-b92f-aed957343177] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1109 15:02:08.610108  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/custom-flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jqpcc" [a5b363dc-9f65-44b7-b92f-aed957343177] Running
E1109 15:02:13.729389  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/enable-default-cni-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:13.736021  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/enable-default-cni-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:13.747703  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/enable-default-cni-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:13.769463  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/enable-default-cni-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:13.811635  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/enable-default-cni-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:13.894037  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/enable-default-cni-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:14.056420  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/enable-default-cni-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:14.378684  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/enable-default-cni-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:15.020146  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/enable-default-cni-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:16.301981  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/enable-default-cni-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.006765071s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-711258 -n default-k8s-diff-port-711258
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-711258 -n default-k8s-diff-port-711258: exit status 7 (109.039906ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-711258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (64.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-711258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-711258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m3.75564811s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-711258 -n default-k8s-diff-port-711258
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (64.12s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jqpcc" [a5b363dc-9f65-44b7-b92f-aed957343177] Running
E1109 15:02:18.863845  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/enable-default-cni-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006854797s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-451846 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.42s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-451846 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.42s)

TestStartStop/group/embed-certs/serial/Pause (4.48s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-451846 --alsologtostderr -v=1
E1109 15:02:23.985828  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/enable-default-cni-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-451846 --alsologtostderr -v=1: (1.388634023s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-451846 -n embed-certs-451846
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-451846 -n embed-certs-451846: exit status 2 (345.257975ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-451846 -n embed-certs-451846
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-451846 -n embed-certs-451846: exit status 2 (333.767302ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-451846 --alsologtostderr -v=1
E1109 15:02:25.395838  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/auto-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:26.109861  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/kindnet-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-451846 --alsologtostderr -v=1: (1.467347716s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-451846 -n embed-certs-451846
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-451846 -n embed-certs-451846
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.48s)
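Condensed, the pause cycle above is five CLI calls; in this sketch (profile name from the log) the non-zero status exits are the expected state while paused:

out/minikube-linux-amd64 pause -p embed-certs-451846 --alsologtostderr -v=1
# While paused: APIServer reports "Paused", Kubelet "Stopped"; both status calls exit 2 ("may be ok").
out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-451846 -n embed-certs-451846 || true
out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-451846 -n embed-certs-451846 || true
out/minikube-linux-amd64 unpause -p embed-certs-451846 --alsologtostderr -v=1
# After unpause the same status queries are expected to exit 0 again.
out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-451846 -n embed-certs-451846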

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hz4j2" [52ac8623-b445-4176-831e-0863a80cd546] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1109 15:02:45.138877  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:45.145467  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:45.157870  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:45.179431  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:45.221063  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:45.302832  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:45.464452  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:45.786534  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:46.429415  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hz4j2" [52ac8623-b445-4176-831e-0863a80cd546] Running
E1109 15:02:47.710907  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:49.572275  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/custom-flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:50.273107  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.008200165s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hz4j2" [52ac8623-b445-4176-831e-0863a80cd546] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005301333s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-874801 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.58s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-188575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-188575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.582682661s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.58s)

TestStartStop/group/newest-cni/serial/Stop (11.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-188575 --alsologtostderr -v=3
E1109 15:02:54.709792  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/enable-default-cni-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:55.394555  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-188575 --alsologtostderr -v=3: (11.12916425s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.13s)
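A minimal reproduction of this step (profile name from the log); the trailing status call shows the "Stopped"/exit-7 state that the later EnableAddonAfterStop step treats as acceptable:

out/minikube-linux-amd64 stop -p newest-cni-188575 --alsologtostderr -v=3
out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-188575 -n newest-cni-188575 || true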

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-874801 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.14s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-874801 --alsologtostderr -v=1
E1109 15:02:57.658317  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/bridge-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:57.665471  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/bridge-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:57.677106  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/bridge-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:57.698932  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/bridge-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:57.740594  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/bridge-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:57.822952  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/bridge-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:02:57.985401  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/bridge-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-874801 --alsologtostderr -v=1: (1.054720141s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-874801 -n no-preload-874801
E1109 15:02:58.307535  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/bridge-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-874801 -n no-preload-874801: exit status 2 (265.173339ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-874801 -n no-preload-874801
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-874801 -n no-preload-874801: exit status 2 (266.176377ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-874801 --alsologtostderr -v=1
E1109 15:02:58.949524  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/bridge-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-874801 -n no-preload-874801
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-874801 -n no-preload-874801
E1109 15:03:00.230936  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/bridge-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.14s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-188575 -n newest-cni-188575
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-188575 -n newest-cni-188575: exit status 7 (81.311523ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-188575 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (40.37s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-188575 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1109 15:03:05.636485  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:03:07.915223  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/bridge-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-188575 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (40.030352315s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-188575 -n newest-cni-188575
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (40.37s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xnw72" [45b5f70f-772e-4dc0-9f23-3fbfb2c2dc67] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xnw72" [45b5f70f-772e-4dc0-9f23-3fbfb2c2dc67] Running
E1109 15:03:18.157289  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/bridge-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004525956s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xnw72" [45b5f70f-772e-4dc0-9f23-3fbfb2c2dc67] Running
E1109 15:03:24.970542  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/calico-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 15:03:26.117977  553473 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-549598/.minikube/profiles/flannel-877855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005670769s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-711258 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-711258 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-711258 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-711258 --alsologtostderr -v=1: (1.149937913s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-711258 -n default-k8s-diff-port-711258
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-711258 -n default-k8s-diff-port-711258: exit status 2 (275.875261ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-711258 -n default-k8s-diff-port-711258
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-711258 -n default-k8s-diff-port-711258: exit status 2 (284.062156ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-711258 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-711258 -n default-k8s-diff-port-711258
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-711258 -n default-k8s-diff-port-711258
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.38s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-188575 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/newest-cni/serial/Pause (4.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-188575 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-188575 --alsologtostderr -v=1: (1.686529578s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-188575 -n newest-cni-188575
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-188575 -n newest-cni-188575: exit status 2 (317.650057ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-188575 -n newest-cni-188575
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-188575 -n newest-cni-188575: exit status 2 (291.146484ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-188575 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-188575 -n newest-cni-188575
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-188575 -n newest-cni-188575
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.12s)

Test skip (40/345)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.37
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
130 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
134 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
258 TestNetworkPlugins/group/kubenet 4.21
267 TestNetworkPlugins/group/cilium 4.73
295 TestStartStop/group/disable-driver-mounts 0.21

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.37s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-640912 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.37s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
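
Every TunnelCmd skip above comes from the same precondition at functional_test_tunnel_test.go:90: the tunnel tests need to modify the host routing table, and running 'route' would prompt for a password. A rough Go sketch of such a guard, assuming a non-interactive sudo probe (illustrative only, not minikube's exact check):

    package tunnel_test

    import (
        "os/exec"
        "testing"
    )

    // routeAccessible reports whether 'route' can run without a password
    // prompt, using sudo's non-interactive (-n) mode. Purely illustrative;
    // the probe command is an assumption, not minikube's real helper.
    func routeAccessible() error {
        return exec.Command("sudo", "-n", "route").Run()
    }

    func TestTunnelGuardSketch(t *testing.T) {
        if err := routeAccessible(); err != nil {
            t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
        }
    }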

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (4.21s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-877855 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-877855

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-877855

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-877855

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-877855

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-877855

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-877855

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-877855

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-877855

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-877855

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-877855

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: /etc/hosts:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: /etc/resolv.conf:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-877855

>>> host: crictl pods:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: crictl containers:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> k8s: describe netcat deployment:
error: context "kubenet-877855" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-877855" does not exist

>>> k8s: netcat logs:
error: context "kubenet-877855" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-877855" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-877855" does not exist

>>> k8s: coredns logs:
error: context "kubenet-877855" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-877855" does not exist

>>> k8s: api server logs:
error: context "kubenet-877855" does not exist

>>> host: /etc/cni:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: ip a s:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: ip r s:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: iptables-save:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: iptables table nat:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-877855" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-877855" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-877855" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: kubelet daemon config:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> k8s: kubelet logs:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-877855

>>> host: docker daemon status:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: docker daemon config:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: docker system info:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: cri-docker daemon status:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: cri-docker daemon config:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: cri-dockerd version:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: containerd daemon status:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: containerd daemon config:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: containerd config dump:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: crio daemon status:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: crio daemon config:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: /etc/crio:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

>>> host: crio config:
* Profile "kubenet-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-877855"

----------------------- debugLogs end: kubenet-877855 [took: 3.989820519s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-877855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-877855
--- SKIP: TestNetworkPlugins/group/kubenet (4.21s)

TestNetworkPlugins/group/cilium (4.73s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-877855 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-877855

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-877855

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-877855

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-877855

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-877855

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-877855

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-877855

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-877855

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-877855

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-877855

>>> host: /etc/nsswitch.conf:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: /etc/hosts:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: /etc/resolv.conf:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-877855

>>> host: crictl pods:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: crictl containers:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> k8s: describe netcat deployment:
error: context "cilium-877855" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-877855" does not exist

>>> k8s: netcat logs:
error: context "cilium-877855" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-877855" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-877855" does not exist

>>> k8s: coredns logs:
error: context "cilium-877855" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-877855" does not exist

>>> k8s: api server logs:
error: context "cilium-877855" does not exist

>>> host: /etc/cni:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: ip a s:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: ip r s:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: iptables-save:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: iptables table nat:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-877855

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-877855

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-877855" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-877855" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-877855

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-877855

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-877855" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-877855" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-877855" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-877855" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-877855" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: kubelet daemon config:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> k8s: kubelet logs:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-877855

>>> host: docker daemon status:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: docker daemon config:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: docker system info:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: cri-docker daemon status:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: cri-docker daemon config:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: cri-dockerd version:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: containerd daemon status:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: containerd daemon config:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: containerd config dump:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: crio daemon status:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: crio daemon config:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: /etc/crio:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

>>> host: crio config:
* Profile "cilium-877855" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877855"

----------------------- debugLogs end: cilium-877855 [took: 4.520251414s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-877855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-877855
--- SKIP: TestNetworkPlugins/group/cilium (4.73s)

TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-536733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-536733
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)