=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run: kubectl --context addons-886556 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run: kubectl --context addons-886556 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run: kubectl --context addons-886556 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [6dccff02-c09a-4293-83a1-fd22a7c40b8c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [6dccff02-c09a-4293-83a1-fd22a7c40b8c] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.004125961s
I1217 19:24:13.020919 7531 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run: out/minikube-linux-amd64 -p addons-886556 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-886556 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.587147263s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run: kubectl --context addons-886556 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run: out/minikube-linux-amd64 -p addons-886556 ip
addons_test.go:301: (dbg) Run: nslookup hello-john.test 192.168.39.92
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-886556 -n addons-886556
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-amd64 -p addons-886556 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-886556 logs -n 25: (1.315490806s)
helpers_test.go:261: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-238357 │ download-only-238357 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ 17 Dec 25 19:20 UTC │
│ start │ --download-only -p binary-mirror-144298 --alsologtostderr --binary-mirror http://127.0.0.1:44329 --driver=kvm2 --container-runtime=crio │ binary-mirror-144298 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ │
│ delete │ -p binary-mirror-144298 │ binary-mirror-144298 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ 17 Dec 25 19:20 UTC │
│ addons │ disable dashboard -p addons-886556 │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ │
│ addons │ enable dashboard -p addons-886556 │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ │
│ start │ -p addons-886556 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:20 UTC │ 17 Dec 25 19:23 UTC │
│ addons │ addons-886556 addons disable volcano --alsologtostderr -v=1 │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:23 UTC │
│ addons │ addons-886556 addons disable gcp-auth --alsologtostderr -v=1 │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:23 UTC │
│ addons │ enable headlamp -p addons-886556 --alsologtostderr -v=1 │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:23 UTC │
│ addons │ addons-886556 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:23 UTC │
│ addons │ addons-886556 addons disable metrics-server --alsologtostderr -v=1 │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:23 UTC │
│ addons │ addons-886556 addons disable headlamp --alsologtostderr -v=1 │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:24 UTC │
│ ssh │ addons-886556 ssh cat /opt/local-path-provisioner/pvc-51a5db76-42c3-423c-b2d7-c24e496695a8_default_test-pvc/file1 │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:23 UTC │
│ addons │ addons-886556 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:24 UTC │
│ ip │ addons-886556 ip │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:23 UTC │
│ addons │ addons-886556 addons disable registry --alsologtostderr -v=1 │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:23 UTC │
│ addons │ addons-886556 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:23 UTC │ 17 Dec 25 19:24 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-886556 │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
│ addons │ addons-886556 addons disable registry-creds --alsologtostderr -v=1 │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
│ addons │ addons-886556 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
│ addons │ addons-886556 addons disable yakd --alsologtostderr -v=1 │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
│ ssh │ addons-886556 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ │
│ addons │ addons-886556 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
│ addons │ addons-886556 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
│ ip │ addons-886556 ip │ addons-886556 │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │ 17 Dec 25 19:26 UTC │
└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/17 19:20:57
Running on machine: ubuntu-20-agent-11
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1217 19:20:57.823805 8502 out.go:360] Setting OutFile to fd 1 ...
I1217 19:20:57.823894 8502 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:20:57.823905 8502 out.go:374] Setting ErrFile to fd 2...
I1217 19:20:57.823912 8502 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:20:57.824114 8502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-3611/.minikube/bin
I1217 19:20:57.824672 8502 out.go:368] Setting JSON to false
I1217 19:20:57.825516 8502 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":197,"bootTime":1765999061,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1217 19:20:57.825586 8502 start.go:143] virtualization: kvm guest
I1217 19:20:57.827588 8502 out.go:179] * [addons-886556] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1217 19:20:57.828989 8502 out.go:179] - MINIKUBE_LOCATION=22186
I1217 19:20:57.828987 8502 notify.go:221] Checking for updates...
I1217 19:20:57.830423 8502 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1217 19:20:57.831836 8502 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22186-3611/kubeconfig
I1217 19:20:57.833027 8502 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-3611/.minikube
I1217 19:20:57.837781 8502 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1217 19:20:57.839177 8502 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1217 19:20:57.840581 8502 driver.go:422] Setting default libvirt URI to qemu:///system
I1217 19:20:57.870963 8502 out.go:179] * Using the kvm2 driver based on user configuration
I1217 19:20:57.872099 8502 start.go:309] selected driver: kvm2
I1217 19:20:57.872111 8502 start.go:927] validating driver "kvm2" against <nil>
I1217 19:20:57.872128 8502 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1217 19:20:57.872827 8502 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1217 19:20:57.873031 8502 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1217 19:20:57.873056 8502 cni.go:84] Creating CNI manager for ""
I1217 19:20:57.873092 8502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1217 19:20:57.873101 8502 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1217 19:20:57.873133 8502 start.go:353] cluster config:
{Name:addons-886556 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-886556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1217 19:20:57.873230 8502 iso.go:125] acquiring lock: {Name:mkf0d7f706dad630931de886de0fce55b517853c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1217 19:20:57.874622 8502 out.go:179] * Starting "addons-886556" primary control-plane node in "addons-886556" cluster
I1217 19:20:57.875697 8502 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
I1217 19:20:57.875729 8502 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
I1217 19:20:57.875739 8502 cache.go:65] Caching tarball of preloaded images
I1217 19:20:57.875830 8502 preload.go:238] Found /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1217 19:20:57.875843 8502 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
I1217 19:20:57.876160 8502 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/config.json ...
I1217 19:20:57.876189 8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/config.json: {Name:mk4dda90071125ffcf60327ec69d165b551492dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 19:20:57.876361 8502 start.go:360] acquireMachinesLock for addons-886556: {Name:mk03890d04d41d66ccbc23571d0f065ba20ffda0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1217 19:20:57.876430 8502 start.go:364] duration metric: took 54.024µs to acquireMachinesLock for "addons-886556"
I1217 19:20:57.876455 8502 start.go:93] Provisioning new machine with config: &{Name:addons-886556 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-886556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
I1217 19:20:57.876549 8502 start.go:125] createHost starting for "" (driver="kvm2")
I1217 19:20:57.878048 8502 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1217 19:20:57.878207 8502 start.go:159] libmachine.API.Create for "addons-886556" (driver="kvm2")
I1217 19:20:57.878238 8502 client.go:173] LocalClient.Create starting
I1217 19:20:57.878315 8502 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem
I1217 19:20:57.968368 8502 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem
I1217 19:20:58.045418 8502 main.go:143] libmachine: creating domain...
I1217 19:20:58.045441 8502 main.go:143] libmachine: creating network...
I1217 19:20:58.046789 8502 main.go:143] libmachine: found existing default network
I1217 19:20:58.047027 8502 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1217 19:20:58.047554 8502 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ce5f60}
I1217 19:20:58.047650 8502 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-886556</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1217 19:20:58.053630 8502 main.go:143] libmachine: creating private network mk-addons-886556 192.168.39.0/24...
I1217 19:20:58.120662 8502 main.go:143] libmachine: private network mk-addons-886556 192.168.39.0/24 created
I1217 19:20:58.120948 8502 main.go:143] libmachine: <network>
<name>mk-addons-886556</name>
<uuid>aca24f78-8089-400f-af3e-2df8ba584310</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:75:6e:c1'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1217 19:20:58.121007 8502 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556 ...
I1217 19:20:58.121045 8502 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22186-3611/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso
I1217 19:20:58.121059 8502 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22186-3611/.minikube
I1217 19:20:58.121140 8502 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22186-3611/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22186-3611/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso...
I1217 19:20:58.411320 8502 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa...
I1217 19:20:58.479620 8502 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/addons-886556.rawdisk...
I1217 19:20:58.479665 8502 main.go:143] libmachine: Writing magic tar header
I1217 19:20:58.479692 8502 main.go:143] libmachine: Writing SSH key tar header
I1217 19:20:58.479797 8502 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556 ...
I1217 19:20:58.479877 8502 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556
I1217 19:20:58.479925 8502 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556 (perms=drwx------)
I1217 19:20:58.479953 8502 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611/.minikube/machines
I1217 19:20:58.479972 8502 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611/.minikube/machines (perms=drwxr-xr-x)
I1217 19:20:58.479990 8502 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611/.minikube
I1217 19:20:58.480009 8502 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611/.minikube (perms=drwxr-xr-x)
I1217 19:20:58.480026 8502 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22186-3611
I1217 19:20:58.480043 8502 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22186-3611 (perms=drwxrwxr-x)
I1217 19:20:58.480060 8502 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1217 19:20:58.480074 8502 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1217 19:20:58.480087 8502 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1217 19:20:58.480112 8502 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1217 19:20:58.480131 8502 main.go:143] libmachine: checking permissions on dir: /home
I1217 19:20:58.480144 8502 main.go:143] libmachine: skipping /home - not owner
I1217 19:20:58.480151 8502 main.go:143] libmachine: defining domain...
I1217 19:20:58.481444 8502 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-886556</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/addons-886556.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-886556'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1217 19:20:58.489252 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:ee:de:94 in network default
I1217 19:20:58.489890 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:20:58.489913 8502 main.go:143] libmachine: starting domain...
I1217 19:20:58.489919 8502 main.go:143] libmachine: ensuring networks are active...
I1217 19:20:58.490758 8502 main.go:143] libmachine: Ensuring network default is active
I1217 19:20:58.491128 8502 main.go:143] libmachine: Ensuring network mk-addons-886556 is active
I1217 19:20:58.492042 8502 main.go:143] libmachine: getting domain XML...
I1217 19:20:58.493268 8502 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-886556</name>
<uuid>9d7dd346-d2b7-4fec-936f-08e6e7425367</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/addons-886556.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:a0:a1:59'/>
<source network='mk-addons-886556'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:ee:de:94'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1217 19:20:59.792041 8502 main.go:143] libmachine: waiting for domain to start...
I1217 19:20:59.793336 8502 main.go:143] libmachine: domain is now running
I1217 19:20:59.793358 8502 main.go:143] libmachine: waiting for IP...
I1217 19:20:59.794012 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:20:59.794436 8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
I1217 19:20:59.794450 8502 main.go:143] libmachine: trying to list again with source=arp
I1217 19:20:59.794776 8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
I1217 19:20:59.794825 8502 retry.go:31] will retry after 203.171763ms: waiting for domain to come up
I1217 19:20:59.999183 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:20:59.999819 8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
I1217 19:20:59.999836 8502 main.go:143] libmachine: trying to list again with source=arp
I1217 19:21:00.000205 8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
I1217 19:21:00.000257 8502 retry.go:31] will retry after 280.603302ms: waiting for domain to come up
I1217 19:21:00.282706 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:00.283209 8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
I1217 19:21:00.283222 8502 main.go:143] libmachine: trying to list again with source=arp
I1217 19:21:00.283475 8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
I1217 19:21:00.283508 8502 retry.go:31] will retry after 307.303733ms: waiting for domain to come up
I1217 19:21:00.591871 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:00.592310 8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
I1217 19:21:00.592326 8502 main.go:143] libmachine: trying to list again with source=arp
I1217 19:21:00.592644 8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
I1217 19:21:00.592686 8502 retry.go:31] will retry after 610.242195ms: waiting for domain to come up
I1217 19:21:01.204023 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:01.204710 8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
I1217 19:21:01.204727 8502 main.go:143] libmachine: trying to list again with source=arp
I1217 19:21:01.205013 8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
I1217 19:21:01.205045 8502 retry.go:31] will retry after 740.456865ms: waiting for domain to come up
I1217 19:21:01.946747 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:01.947444 8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
I1217 19:21:01.947463 8502 main.go:143] libmachine: trying to list again with source=arp
I1217 19:21:01.947761 8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
I1217 19:21:01.947803 8502 retry.go:31] will retry after 844.164568ms: waiting for domain to come up
I1217 19:21:02.794100 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:02.794738 8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
I1217 19:21:02.794757 8502 main.go:143] libmachine: trying to list again with source=arp
I1217 19:21:02.795063 8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
I1217 19:21:02.795115 8502 retry.go:31] will retry after 779.073526ms: waiting for domain to come up
I1217 19:21:03.575927 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:03.576568 8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
I1217 19:21:03.576588 8502 main.go:143] libmachine: trying to list again with source=arp
I1217 19:21:03.576834 8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
I1217 19:21:03.576865 8502 retry.go:31] will retry after 1.230149664s: waiting for domain to come up
I1217 19:21:04.809397 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:04.810030 8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
I1217 19:21:04.810047 8502 main.go:143] libmachine: trying to list again with source=arp
I1217 19:21:04.810336 8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
I1217 19:21:04.810388 8502 retry.go:31] will retry after 1.834558493s: waiting for domain to come up
I1217 19:21:06.647381 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:06.647919 8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
I1217 19:21:06.647934 8502 main.go:143] libmachine: trying to list again with source=arp
I1217 19:21:06.648189 8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
I1217 19:21:06.648218 8502 retry.go:31] will retry after 1.980010423s: waiting for domain to come up
I1217 19:21:08.629424 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:08.630069 8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
I1217 19:21:08.630090 8502 main.go:143] libmachine: trying to list again with source=arp
I1217 19:21:08.630396 8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
I1217 19:21:08.630429 8502 retry.go:31] will retry after 2.681115886s: waiting for domain to come up
I1217 19:21:11.312827 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:11.313414 8502 main.go:143] libmachine: no network interface addresses found for domain addons-886556 (source=lease)
I1217 19:21:11.313430 8502 main.go:143] libmachine: trying to list again with source=arp
I1217 19:21:11.313777 8502 main.go:143] libmachine: unable to find current IP address of domain addons-886556 in network mk-addons-886556 (interfaces detected: [])
I1217 19:21:11.313817 8502 retry.go:31] will retry after 2.507746112s: waiting for domain to come up
I1217 19:21:13.823749 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:13.824640 8502 main.go:143] libmachine: domain addons-886556 has current primary IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:13.824665 8502 main.go:143] libmachine: found domain IP: 192.168.39.92
I1217 19:21:13.824676 8502 main.go:143] libmachine: reserving static IP address...
I1217 19:21:13.825231 8502 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-886556", mac: "52:54:00:a0:a1:59", ip: "192.168.39.92"} in network mk-addons-886556
I1217 19:21:14.048396 8502 main.go:143] libmachine: reserved static IP address 192.168.39.92 for domain addons-886556
I1217 19:21:14.048420 8502 main.go:143] libmachine: waiting for SSH...
I1217 19:21:14.048428 8502 main.go:143] libmachine: Getting to WaitForSSH function...
I1217 19:21:14.051179 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:14.051661 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:14.051695 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:14.051903 8502 main.go:143] libmachine: Using SSH client type: native
I1217 19:21:14.052109 8502 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.92 22 <nil> <nil>}
I1217 19:21:14.052121 8502 main.go:143] libmachine: About to run SSH command:
exit 0
I1217 19:21:14.169401 8502 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1217 19:21:14.169798 8502 main.go:143] libmachine: domain creation complete
I1217 19:21:14.171349 8502 machine.go:94] provisionDockerMachine start ...
I1217 19:21:14.173680 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:14.174091 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:14.174117 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:14.174331 8502 main.go:143] libmachine: Using SSH client type: native
I1217 19:21:14.174612 8502 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.92 22 <nil> <nil>}
I1217 19:21:14.174624 8502 main.go:143] libmachine: About to run SSH command:
hostname
I1217 19:21:14.296862 8502 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1217 19:21:14.296887 8502 buildroot.go:166] provisioning hostname "addons-886556"
I1217 19:21:14.300271 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:14.300797 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:14.300831 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:14.301020 8502 main.go:143] libmachine: Using SSH client type: native
I1217 19:21:14.301258 8502 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.92 22 <nil> <nil>}
I1217 19:21:14.301271 8502 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-886556 && echo "addons-886556" | sudo tee /etc/hostname
I1217 19:21:14.439027 8502 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-886556
I1217 19:21:14.441944 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:14.442388 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:14.442408 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:14.442625 8502 main.go:143] libmachine: Using SSH client type: native
I1217 19:21:14.442838 8502 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.92 22 <nil> <nil>}
I1217 19:21:14.442852 8502 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-886556' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-886556/g' /etc/hosts;
else
echo '127.0.1.1 addons-886556' | sudo tee -a /etc/hosts;
fi
fi
I1217 19:21:14.572842 8502 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1217 19:21:14.572868 8502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22186-3611/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-3611/.minikube}
I1217 19:21:14.572884 8502 buildroot.go:174] setting up certificates
I1217 19:21:14.572894 8502 provision.go:84] configureAuth start
I1217 19:21:14.575876 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:14.576389 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:14.576421 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:14.579055 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:14.579501 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:14.579544 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:14.579716 8502 provision.go:143] copyHostCerts
I1217 19:21:14.579805 8502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/ca.pem (1082 bytes)
I1217 19:21:14.579915 8502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/cert.pem (1123 bytes)
I1217 19:21:14.579969 8502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-3611/.minikube/key.pem (1679 bytes)
I1217 19:21:14.580013 8502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem org=jenkins.addons-886556 san=[127.0.0.1 192.168.39.92 addons-886556 localhost minikube]
I1217 19:21:14.648029 8502 provision.go:177] copyRemoteCerts
I1217 19:21:14.648091 8502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1217 19:21:14.650785 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:14.651200 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:14.651223 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:14.651405 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:14.743301 8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1217 19:21:14.777058 8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1217 19:21:14.810665 8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1217 19:21:14.844917 8502 provision.go:87] duration metric: took 272.010654ms to configureAuth
I1217 19:21:14.844949 8502 buildroot.go:189] setting minikube options for container-runtime
I1217 19:21:14.845167 8502 config.go:182] Loaded profile config "addons-886556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:21:14.848018 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:14.848486 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:14.848518 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:14.848707 8502 main.go:143] libmachine: Using SSH client type: native
I1217 19:21:14.848902 8502 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.92 22 <nil> <nil>}
I1217 19:21:14.848916 8502 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1217 19:21:15.186908 8502 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1217 19:21:15.186934 8502 machine.go:97] duration metric: took 1.015568656s to provisionDockerMachine
I1217 19:21:15.186944 8502 client.go:176] duration metric: took 17.308699397s to LocalClient.Create
I1217 19:21:15.186960 8502 start.go:167] duration metric: took 17.308754047s to libmachine.API.Create "addons-886556"
I1217 19:21:15.186968 8502 start.go:293] postStartSetup for "addons-886556" (driver="kvm2")
I1217 19:21:15.186976 8502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1217 19:21:15.187049 8502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1217 19:21:15.190125 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:15.190549 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:15.190578 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:15.190755 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:15.279891 8502 ssh_runner.go:195] Run: cat /etc/os-release
I1217 19:21:15.284804 8502 info.go:137] Remote host: Buildroot 2025.02
I1217 19:21:15.284835 8502 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-3611/.minikube/addons for local assets ...
I1217 19:21:15.284910 8502 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-3611/.minikube/files for local assets ...
I1217 19:21:15.284951 8502 start.go:296] duration metric: took 97.977625ms for postStartSetup
I1217 19:21:15.289352 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:15.289712 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:15.289735 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:15.289917 8502 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/config.json ...
I1217 19:21:15.290091 8502 start.go:128] duration metric: took 17.413531228s to createHost
I1217 19:21:15.292224 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:15.292627 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:15.292653 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:15.292839 8502 main.go:143] libmachine: Using SSH client type: native
I1217 19:21:15.293088 8502 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.92 22 <nil> <nil>}
I1217 19:21:15.293100 8502 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1217 19:21:15.411819 8502 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765999275.367003414
I1217 19:21:15.411852 8502 fix.go:216] guest clock: 1765999275.367003414
I1217 19:21:15.411862 8502 fix.go:229] Guest: 2025-12-17 19:21:15.367003414 +0000 UTC Remote: 2025-12-17 19:21:15.290103157 +0000 UTC m=+17.513279926 (delta=76.900257ms)
I1217 19:21:15.411884 8502 fix.go:200] guest clock delta is within tolerance: 76.900257ms
I1217 19:21:15.411890 8502 start.go:83] releasing machines lock for "addons-886556", held for 17.53544805s
I1217 19:21:15.414616 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:15.414966 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:15.414995 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:15.415585 8502 ssh_runner.go:195] Run: cat /version.json
I1217 19:21:15.415622 8502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1217 19:21:15.418706 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:15.418738 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:15.419111 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:15.419171 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:15.419179 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:15.419198 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:15.419421 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:15.419430 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:15.537520 8502 ssh_runner.go:195] Run: systemctl --version
I1217 19:21:15.544400 8502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1217 19:21:15.704034 8502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1217 19:21:15.711391 8502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1217 19:21:15.711472 8502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1217 19:21:15.735074 8502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1217 19:21:15.735111 8502 start.go:496] detecting cgroup driver to use...
I1217 19:21:15.735187 8502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1217 19:21:15.762556 8502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1217 19:21:15.785216 8502 docker.go:218] disabling cri-docker service (if available) ...
I1217 19:21:15.785286 8502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1217 19:21:15.804692 8502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1217 19:21:15.822494 8502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1217 19:21:15.974641 8502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1217 19:21:16.205424 8502 docker.go:234] disabling docker service ...
I1217 19:21:16.205500 8502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1217 19:21:16.222601 8502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1217 19:21:16.238813 8502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1217 19:21:16.399827 8502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1217 19:21:16.548077 8502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1217 19:21:16.565428 8502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1217 19:21:16.589616 8502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1217 19:21:16.589690 8502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1217 19:21:16.603118 8502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1217 19:21:16.603197 8502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1217 19:21:16.617064 8502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1217 19:21:16.630781 8502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1217 19:21:16.644559 8502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1217 19:21:16.658592 8502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1217 19:21:16.671764 8502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1217 19:21:16.694194 8502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1217 19:21:16.708548 8502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1217 19:21:16.720387 8502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1217 19:21:16.720455 8502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1217 19:21:16.745604 8502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1217 19:21:16.762000 8502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 19:21:16.905628 8502 ssh_runner.go:195] Run: sudo systemctl restart crio
I1217 19:21:17.039636 8502 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1217 19:21:17.039735 8502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1217 19:21:17.045593 8502 start.go:564] Will wait 60s for crictl version
I1217 19:21:17.045685 8502 ssh_runner.go:195] Run: which crictl
I1217 19:21:17.050292 8502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1217 19:21:17.088112 8502 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1217 19:21:17.088263 8502 ssh_runner.go:195] Run: crio --version
I1217 19:21:17.118813 8502 ssh_runner.go:195] Run: crio --version
I1217 19:21:17.152495 8502 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
I1217 19:21:17.156865 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:17.157285 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:17.157311 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:17.157586 8502 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1217 19:21:17.163055 8502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1217 19:21:17.180783 8502 kubeadm.go:884] updating cluster {Name:addons-886556 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-886556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1217 19:21:17.180890 8502 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
I1217 19:21:17.180930 8502 ssh_runner.go:195] Run: sudo crictl images --output json
I1217 19:21:17.215245 8502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
I1217 19:21:17.215325 8502 ssh_runner.go:195] Run: which lz4
I1217 19:21:17.220214 8502 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1217 19:21:17.225745 8502 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1217 19:21:17.225789 8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340314847 bytes)
I1217 19:21:18.544953 8502 crio.go:462] duration metric: took 1.324813392s to copy over tarball
I1217 19:21:18.545026 8502 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1217 19:21:20.094525 8502 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.549464855s)
I1217 19:21:20.094586 8502 crio.go:469] duration metric: took 1.549604367s to extract the tarball
I1217 19:21:20.094594 8502 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1217 19:21:20.131704 8502 ssh_runner.go:195] Run: sudo crictl images --output json
I1217 19:21:20.170657 8502 crio.go:514] all images are preloaded for cri-o runtime.
I1217 19:21:20.170682 8502 cache_images.go:86] Images are preloaded, skipping loading
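The preload sequence above follows a stat-probe, copy, extract, clean-up pattern: check whether /preloaded.tar.lz4 already exists on the VM, scp the cached tarball over when it does not, extract it into /var, then remove the tarball. A minimal sketch of that flow (the function name and paths are illustrative, not minikube's actual code; plain tarfile extraction stands in for the `tar -I lz4` pipeline):

```python
import os
import shutil
import tarfile

def ensure_preload(local_tarball: str, staged_path: str, extract_dir: str) -> None:
    """Stage a preloaded-images tarball and extract it, skipping the copy
    when the staged file is already present. A sketch of minikube's
    stat -> scp -> tar -> rm flow over ssh; all names are illustrative."""
    if not os.path.exists(staged_path):          # the `stat -c "%s %y"` existence probe
        shutil.copy(local_tarball, staged_path)  # stands in for the scp step
    with tarfile.open(staged_path) as tf:        # the `tar ... -xf /preloaded.tar.lz4` step
        tf.extractall(extract_dir)
    os.remove(staged_path)                       # cleanup, as in `rm: /preloaded.tar.lz4`
```

After extraction the log re-runs `crictl images` to confirm the images landed, which is why the second check reports "all images are preloaded".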
I1217 19:21:20.170690 8502 kubeadm.go:935] updating node { 192.168.39.92 8443 v1.34.3 crio true true} ...
I1217 19:21:20.170766 8502 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-886556 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
[Install]
config:
{KubernetesVersion:v1.34.3 ClusterName:addons-886556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1217 19:21:20.170830 8502 ssh_runner.go:195] Run: crio config
I1217 19:21:20.218630 8502 cni.go:84] Creating CNI manager for ""
I1217 19:21:20.218702 8502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1217 19:21:20.218737 8502 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1217 19:21:20.218784 8502 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-886556 NodeName:addons-886556 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1217 19:21:20.219074 8502 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.92
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-886556"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.92"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
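The kubeadm config printed above is one multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined by `---` separator lines; it is written to /var/tmp/minikube/kubeadm.yaml.new as a single 2213-byte blob. A minimal sketch of splitting such a blob back into its component documents, without a YAML parser:

```python
def split_yaml_docs(text: str) -> list[str]:
    """Split a multi-document YAML string on `---` separator lines.
    A minimal sketch for inspecting configs like the one above;
    no actual YAML parsing is performed."""
    docs, cur = [], []
    for line in text.splitlines():
        if line.strip() == "---":        # document boundary
            docs.append("\n".join(cur))
            cur = []
        else:
            cur.append(line)
    docs.append("\n".join(cur))          # flush the trailing document
    return [d for d in docs if d.strip()]
```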
I1217 19:21:20.219176 8502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
I1217 19:21:20.231145 8502 binaries.go:51] Found k8s binaries, skipping transfer
I1217 19:21:20.231200 8502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1217 19:21:20.242397 8502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I1217 19:21:20.262036 8502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1217 19:21:20.281052 8502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
I1217 19:21:20.300659 8502 ssh_runner.go:195] Run: grep 192.168.39.92 control-plane.minikube.internal$ /etc/hosts
I1217 19:21:20.304712 8502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
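The /etc/hosts update above is an idempotent upsert: `grep -v` drops any stale line ending in the control-plane name, `echo` appends the fresh IP mapping, and the result is written to a temp file and copied back with sudo. The filtering logic can be sketched as (a sketch of the shell pipeline, not minikube's code; the tab separator is an assumption matching the `grep -v $'\t...'` pattern):

```python
def upsert_hosts_entry(hosts_text: str, ip: str, name: str) -> str:
    """Rebuild /etc/hosts content with exactly one entry for `name`:
    drop any existing line mapping that name, then append ip<TAB>name."""
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + name)]   # the grep -v step
    kept.append(f"{ip}\t{name}")                 # the echo step
    return "\n".join(kept) + "\n"
```

Writing to `/tmp/h.$$` and then `sudo cp`-ing over /etc/hosts (rather than redirecting in place) is what lets the pipeline run without root until the final copy.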
I1217 19:21:20.318847 8502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 19:21:20.463640 8502 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1217 19:21:20.499965 8502 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556 for IP: 192.168.39.92
I1217 19:21:20.499986 8502 certs.go:195] generating shared ca certs ...
I1217 19:21:20.500000 8502 certs.go:227] acquiring lock for ca certs: {Name:mka9d751f3e3cbcb654d1f1d24f2b10b27bc58a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 19:21:20.500140 8502 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-3611/.minikube/ca.key
I1217 19:21:20.531735 8502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt ...
I1217 19:21:20.531762 8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt: {Name:mke133978246d86d25f83680d056f0becec00cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 19:21:20.531909 8502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-3611/.minikube/ca.key ...
I1217 19:21:20.531919 8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/ca.key: {Name:mk3bbb3a281ad4113e29b15cfc9da235007f0c14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 19:21:20.531989 8502 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.key
I1217 19:21:20.712328 8502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.crt ...
I1217 19:21:20.712358 8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.crt: {Name:mk80d8a99bde89b8a4c0aed125150a55ea9e10ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 19:21:20.712506 8502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.key ...
I1217 19:21:20.712516 8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.key: {Name:mk6af4243fb1605159c5504c82735178cd145803 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 19:21:20.712602 8502 certs.go:257] generating profile certs ...
I1217 19:21:20.712652 8502 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.key
I1217 19:21:20.712674 8502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt with IP's: []
I1217 19:21:20.850457 8502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt ...
I1217 19:21:20.850484 8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.crt: {Name:mk2fedc6adf0d18a3c89d248e468613ff49b6202 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 19:21:20.850655 8502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.key ...
I1217 19:21:20.850667 8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/client.key: {Name:mk9acfe2f8a697299d32b49792e0ce7628c1d91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 19:21:20.850736 8502 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.key.7ff6a81a
I1217 19:21:20.850754 8502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.crt.7ff6a81a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.92]
I1217 19:21:20.944063 8502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.crt.7ff6a81a ...
I1217 19:21:20.944091 8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.crt.7ff6a81a: {Name:mkb89ff4d4058b0e80f7486865da6036f6c35ff5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 19:21:20.944265 8502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.key.7ff6a81a ...
I1217 19:21:20.944278 8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.key.7ff6a81a: {Name:mk36d0fccdab8d7c6c0f8341e4315678b659e8b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 19:21:20.944848 8502 certs.go:382] copying /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.crt.7ff6a81a -> /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.crt
I1217 19:21:20.944925 8502 certs.go:386] copying /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.key.7ff6a81a -> /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.key
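Note the apiserver cert is generated under a suffixed name (apiserver.crt.7ff6a81a) and only then copied to its final apiserver.crt name, so a cert minted for a different SAN IP set would land under a different suffix rather than silently replacing the old one. The log does not show how the suffix is derived; one illustrative scheme, hashing the sorted SAN IPs, could look like this (purely hypothetical, not minikube's actual derivation):

```python
import hashlib

def san_suffix(ips: list[str]) -> str:
    """Derive a short, order-independent suffix from a SAN IP set, so
    certs signed for different IP sets get distinct file names.
    Illustrative only; minikube's real suffix scheme may differ."""
    joined = ",".join(sorted(ips))                       # canonical order
    return hashlib.sha1(joined.encode()).hexdigest()[:8]  # 8 hex chars, like 7ff6a81a
```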
I1217 19:21:20.944975 8502 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/proxy-client.key
I1217 19:21:20.944994 8502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/proxy-client.crt with IP's: []
I1217 19:21:21.095027 8502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/proxy-client.crt ...
I1217 19:21:21.095055 8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/proxy-client.crt: {Name:mkb7dd245a415ac8ce4cbbea9a028084ba73665c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 19:21:21.095224 8502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/proxy-client.key ...
I1217 19:21:21.095235 8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/proxy-client.key: {Name:mkf2c7b59493f0a026c238b0cbf503cb32c7693f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 19:21:21.095410 8502 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca-key.pem (1675 bytes)
I1217 19:21:21.095445 8502 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/ca.pem (1082 bytes)
I1217 19:21:21.095470 8502 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/cert.pem (1123 bytes)
I1217 19:21:21.095492 8502 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-3611/.minikube/certs/key.pem (1679 bytes)
I1217 19:21:21.096000 8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1217 19:21:21.129606 8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1217 19:21:21.162180 8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1217 19:21:21.206122 8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1217 19:21:21.251678 8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1217 19:21:21.287632 8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1217 19:21:21.320086 8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1217 19:21:21.352928 8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/profiles/addons-886556/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1217 19:21:21.385635 8502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-3611/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1217 19:21:21.430370 8502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1217 19:21:21.452956 8502 ssh_runner.go:195] Run: openssl version
I1217 19:21:21.460120 8502 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1217 19:21:21.473269 8502 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1217 19:21:21.486569 8502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1217 19:21:21.492757 8502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:21 /usr/share/ca-certificates/minikubeCA.pem
I1217 19:21:21.492819 8502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1217 19:21:21.500824 8502 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1217 19:21:21.514298 8502 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
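OpenSSL locates CA certificates in /etc/ssl/certs via symlinks named after the certificate's subject hash with a collision counter (`<hash>.0`), which is why the log first computes the hash with `openssl x509 -hash -noout` and then force-links b5213941.0. A sketch of the symlink step (the hash value itself must come from openssl; `link_ca_by_hash` is a hypothetical helper, and the collision counter is fixed at .0):

```python
import os

def link_ca_by_hash(cert_path: str, subject_hash: str, certs_dir: str) -> str:
    """Create the <subject_hash>.0 symlink OpenSSL uses to look up a CA
    cert by subject hash. Mirrors `ln -fs` by removing any existing link."""
    link = os.path.join(certs_dir, subject_hash + ".0")
    if os.path.islink(link) or os.path.exists(link):
        os.remove(link)              # the -f (force) part of ln -fs
    os.symlink(cert_path, link)      # the -s (symbolic) part
    return link
```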
I1217 19:21:21.527929 8502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1217 19:21:21.533670 8502 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1217 19:21:21.533743 8502 kubeadm.go:401] StartCluster: {Name:addons-886556 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-886556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1217 19:21:21.533841 8502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1217 19:21:21.533906 8502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1217 19:21:21.574990 8502 cri.go:89] found id: ""
I1217 19:21:21.575071 8502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1217 19:21:21.588999 8502 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1217 19:21:21.602433 8502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1217 19:21:21.615656 8502 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1217 19:21:21.615679 8502 kubeadm.go:158] found existing configuration files:
I1217 19:21:21.615729 8502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1217 19:21:21.629370 8502 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1217 19:21:21.629447 8502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1217 19:21:21.643815 8502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1217 19:21:21.656233 8502 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1217 19:21:21.656309 8502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1217 19:21:21.669749 8502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1217 19:21:21.682002 8502 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1217 19:21:21.682068 8502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1217 19:21:21.694936 8502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1217 19:21:21.706616 8502 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1217 19:21:21.706702 8502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
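The stale-config cleanup loop above greps each kubeconfig (admin.conf, kubelet.conf, controller-manager.conf, scheduler.conf) for the expected control-plane endpoint and removes any file that does not contain it; on this first start the greps fail only because the files do not exist yet, so every `rm -f` is a no-op. The loop can be sketched as (a local-filesystem sketch of the grep-then-rm logic; minikube runs it over ssh with sudo):

```python
import os

def prune_stale_kubeconfigs(conf_paths: list[str], endpoint: str) -> list[str]:
    """Remove kubeconfig files that don't reference `endpoint`; return the
    paths removed. Missing files are tolerated, mirroring `rm -f`."""
    removed = []
    for path in conf_paths:
        try:
            with open(path) as f:
                if endpoint in f.read():   # the grep step: endpoint present, keep it
                    continue
        except FileNotFoundError:
            pass                           # grep exits 2; rm -f would be a no-op
        if os.path.exists(path):
            os.remove(path)                # the rm -f step for a stale config
            removed.append(path)
    return removed
```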
I1217 19:21:21.720680 8502 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1217 19:21:21.777524 8502 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
I1217 19:21:21.777721 8502 kubeadm.go:319] [preflight] Running pre-flight checks
I1217 19:21:21.889364 8502 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1217 19:21:21.889474 8502 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1217 19:21:21.889614 8502 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1217 19:21:21.906323 8502 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1217 19:21:21.909894 8502 out.go:252] - Generating certificates and keys ...
I1217 19:21:21.910012 8502 kubeadm.go:319] [certs] Using existing ca certificate authority
I1217 19:21:21.910102 8502 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1217 19:21:21.979615 8502 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1217 19:21:22.517674 8502 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1217 19:21:22.716263 8502 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1217 19:21:23.060400 8502 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1217 19:21:23.121724 8502 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1217 19:21:23.121903 8502 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-886556 localhost] and IPs [192.168.39.92 127.0.0.1 ::1]
I1217 19:21:23.175921 8502 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1217 19:21:23.176136 8502 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-886556 localhost] and IPs [192.168.39.92 127.0.0.1 ::1]
I1217 19:21:23.488972 8502 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1217 19:21:24.035548 8502 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1217 19:21:24.333932 8502 kubeadm.go:319] [certs] Generating "sa" key and public key
I1217 19:21:24.334082 8502 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1217 19:21:24.547294 8502 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1217 19:21:24.928245 8502 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1217 19:21:25.113392 8502 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1217 19:21:25.287318 8502 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1217 19:21:25.409006 8502 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1217 19:21:25.409165 8502 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1217 19:21:25.411437 8502 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1217 19:21:25.413490 8502 out.go:252] - Booting up control plane ...
I1217 19:21:25.413622 8502 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1217 19:21:25.413753 8502 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1217 19:21:25.414498 8502 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1217 19:21:25.433118 8502 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1217 19:21:25.433353 8502 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1217 19:21:25.441668 8502 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1217 19:21:25.442240 8502 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1217 19:21:25.442442 8502 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1217 19:21:25.626703 8502 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1217 19:21:25.626887 8502 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1217 19:21:26.627780 8502 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002565118s
I1217 19:21:26.630773 8502 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1217 19:21:26.630923 8502 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.92:8443/livez
I1217 19:21:26.631088 8502 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1217 19:21:26.631224 8502 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1217 19:21:29.964182 8502 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.336943937s
I1217 19:21:30.877596 8502 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.251547424s
I1217 19:21:33.626261 8502 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001629428s
I1217 19:21:33.648707 8502 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1217 19:21:33.667735 8502 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1217 19:21:33.685733 8502 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1217 19:21:33.685934 8502 kubeadm.go:319] [mark-control-plane] Marking the node addons-886556 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1217 19:21:33.700067 8502 kubeadm.go:319] [bootstrap-token] Using token: bvjewc.pjpdbzfshg78w916
I1217 19:21:33.701493 8502 out.go:252] - Configuring RBAC rules ...
I1217 19:21:33.701676 8502 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1217 19:21:33.713204 8502 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1217 19:21:33.730449 8502 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1217 19:21:33.734681 8502 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1217 19:21:33.738742 8502 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1217 19:21:33.742953 8502 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1217 19:21:34.033739 8502 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1217 19:21:34.503426 8502 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1217 19:21:35.031774 8502 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1217 19:21:35.032650 8502 kubeadm.go:319]
I1217 19:21:35.032745 8502 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1217 19:21:35.032756 8502 kubeadm.go:319]
I1217 19:21:35.032838 8502 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1217 19:21:35.032848 8502 kubeadm.go:319]
I1217 19:21:35.032897 8502 kubeadm.go:319] mkdir -p $HOME/.kube
I1217 19:21:35.032990 8502 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1217 19:21:35.033051 8502 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1217 19:21:35.033067 8502 kubeadm.go:319]
I1217 19:21:35.033121 8502 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1217 19:21:35.033128 8502 kubeadm.go:319]
I1217 19:21:35.033168 8502 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1217 19:21:35.033172 8502 kubeadm.go:319]
I1217 19:21:35.033216 8502 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1217 19:21:35.033285 8502 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1217 19:21:35.033350 8502 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1217 19:21:35.033359 8502 kubeadm.go:319]
I1217 19:21:35.033445 8502 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1217 19:21:35.033614 8502 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1217 19:21:35.033635 8502 kubeadm.go:319]
I1217 19:21:35.033739 8502 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bvjewc.pjpdbzfshg78w916 \
I1217 19:21:35.033869 8502 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:dc326feeb8e3fcc0b2a801c12465db03b3f763bf73e8e9492b30fdc056a1ecc4 \
I1217 19:21:35.033907 8502 kubeadm.go:319] --control-plane
I1217 19:21:35.033917 8502 kubeadm.go:319]
I1217 19:21:35.034021 8502 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1217 19:21:35.034030 8502 kubeadm.go:319]
I1217 19:21:35.034149 8502 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bvjewc.pjpdbzfshg78w916 \
I1217 19:21:35.034242 8502 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:dc326feeb8e3fcc0b2a801c12465db03b3f763bf73e8e9492b30fdc056a1ecc4
I1217 19:21:35.035642 8502 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1217 19:21:35.035682 8502 cni.go:84] Creating CNI manager for ""
I1217 19:21:35.035697 8502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1217 19:21:35.037579 8502 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1217 19:21:35.038956 8502 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1217 19:21:35.053434 8502 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1217 19:21:35.080131 8502 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1217 19:21:35.080209 8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 19:21:35.080231 8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-886556 minikube.k8s.io/updated_at=2025_12_17T19_21_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=addons-886556 minikube.k8s.io/primary=true
I1217 19:21:35.247395 8502 ops.go:34] apiserver oom_adj: -16
I1217 19:21:35.247512 8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 19:21:35.748116 8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 19:21:36.247651 8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 19:21:36.747794 8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 19:21:37.248201 8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 19:21:37.747686 8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 19:21:38.248195 8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 19:21:38.747818 8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 19:21:39.248596 8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 19:21:39.747770 8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 19:21:40.247929 8502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 19:21:40.472819 8502 kubeadm.go:1114] duration metric: took 5.392670474s to wait for elevateKubeSystemPrivileges
I1217 19:21:40.472860 8502 kubeadm.go:403] duration metric: took 18.93912387s to StartCluster
I1217 19:21:40.472880 8502 settings.go:142] acquiring lock: {Name:mke3c622f98fffe95e3e848232032c1bad05dc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 19:21:40.473034 8502 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22186-3611/kubeconfig
I1217 19:21:40.473370 8502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-3611/kubeconfig: {Name:mk319ed0207c46a4a2ae4d9b320056846508447c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 19:21:40.473575 8502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1217 19:21:40.473650 8502 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
I1217 19:21:40.473782 8502 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1217 19:21:40.473944 8502 addons.go:70] Setting yakd=true in profile "addons-886556"
I1217 19:21:40.473933 8502 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-886556"
I1217 19:21:40.473956 8502 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-886556"
I1217 19:21:40.473960 8502 addons.go:70] Setting cloud-spanner=true in profile "addons-886556"
I1217 19:21:40.473971 8502 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-886556"
I1217 19:21:40.473990 8502 addons.go:70] Setting registry=true in profile "addons-886556"
I1217 19:21:40.473994 8502 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-886556"
I1217 19:21:40.474004 8502 addons.go:239] Setting addon registry=true in "addons-886556"
I1217 19:21:40.474008 8502 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-886556"
I1217 19:21:40.474016 8502 addons.go:70] Setting default-storageclass=true in profile "addons-886556"
I1217 19:21:40.474023 8502 addons.go:239] Setting addon cloud-spanner=true in "addons-886556"
I1217 19:21:40.474032 8502 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-886556"
I1217 19:21:40.474041 8502 host.go:66] Checking if "addons-886556" exists ...
I1217 19:21:40.474041 8502 host.go:66] Checking if "addons-886556" exists ...
I1217 19:21:40.474045 8502 host.go:66] Checking if "addons-886556" exists ...
I1217 19:21:40.474051 8502 addons.go:70] Setting registry-creds=true in profile "addons-886556"
I1217 19:21:40.474063 8502 addons.go:239] Setting addon registry-creds=true in "addons-886556"
I1217 19:21:40.474080 8502 host.go:66] Checking if "addons-886556" exists ...
I1217 19:21:40.474107 8502 addons.go:70] Setting ingress-dns=true in profile "addons-886556"
I1217 19:21:40.474125 8502 addons.go:239] Setting addon ingress-dns=true in "addons-886556"
I1217 19:21:40.474153 8502 host.go:66] Checking if "addons-886556" exists ...
I1217 19:21:40.474621 8502 addons.go:70] Setting inspektor-gadget=true in profile "addons-886556"
I1217 19:21:40.474639 8502 addons.go:239] Setting addon inspektor-gadget=true in "addons-886556"
I1217 19:21:40.474669 8502 host.go:66] Checking if "addons-886556" exists ...
I1217 19:21:40.474909 8502 addons.go:70] Setting metrics-server=true in profile "addons-886556"
I1217 19:21:40.474935 8502 addons.go:239] Setting addon metrics-server=true in "addons-886556"
I1217 19:21:40.474963 8502 host.go:66] Checking if "addons-886556" exists ...
I1217 19:21:40.475142 8502 addons.go:70] Setting storage-provisioner=true in profile "addons-886556"
I1217 19:21:40.475160 8502 addons.go:239] Setting addon storage-provisioner=true in "addons-886556"
I1217 19:21:40.475182 8502 host.go:66] Checking if "addons-886556" exists ...
I1217 19:21:40.473948 8502 config.go:182] Loaded profile config "addons-886556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:21:40.475284 8502 addons.go:70] Setting gcp-auth=true in profile "addons-886556"
I1217 19:21:40.475301 8502 addons.go:70] Setting volcano=true in profile "addons-886556"
I1217 19:21:40.475315 8502 addons.go:70] Setting ingress=true in profile "addons-886556"
I1217 19:21:40.474041 8502 host.go:66] Checking if "addons-886556" exists ...
I1217 19:21:40.475328 8502 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-886556"
I1217 19:21:40.475331 8502 addons.go:239] Setting addon ingress=true in "addons-886556"
I1217 19:21:40.475341 8502 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-886556"
I1217 19:21:40.475363 8502 host.go:66] Checking if "addons-886556" exists ...
I1217 19:21:40.473977 8502 addons.go:239] Setting addon yakd=true in "addons-886556"
I1217 19:21:40.475848 8502 host.go:66] Checking if "addons-886556" exists ...
I1217 19:21:40.476126 8502 addons.go:70] Setting volumesnapshots=true in profile "addons-886556"
I1217 19:21:40.476149 8502 addons.go:239] Setting addon volumesnapshots=true in "addons-886556"
I1217 19:21:40.476178 8502 host.go:66] Checking if "addons-886556" exists ...
I1217 19:21:40.474005 8502 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-886556"
I1217 19:21:40.476376 8502 host.go:66] Checking if "addons-886556" exists ...
I1217 19:21:40.475306 8502 mustload.go:66] Loading cluster: addons-886556
I1217 19:21:40.476427 8502 out.go:179] * Verifying Kubernetes components...
I1217 19:21:40.475319 8502 addons.go:239] Setting addon volcano=true in "addons-886556"
I1217 19:21:40.476619 8502 host.go:66] Checking if "addons-886556" exists ...
I1217 19:21:40.476639 8502 config.go:182] Loaded profile config "addons-886556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:21:40.478152 8502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 19:21:40.482793 8502 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1217 19:21:40.482809 8502 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1217 19:21:40.482847 8502 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1217 19:21:40.482843 8502 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1217 19:21:40.483732 8502 addons.go:239] Setting addon default-storageclass=true in "addons-886556"
I1217 19:21:40.483780 8502 host.go:66] Checking if "addons-886556" exists ...
I1217 19:21:40.484315 8502 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
I1217 19:21:40.484397 8502 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1217 19:21:40.484747 8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1217 19:21:40.484403 8502 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1217 19:21:40.484790 8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1217 19:21:40.484321 8502 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
I1217 19:21:40.484406 8502 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1217 19:21:40.484964 8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1217 19:21:40.485105 8502 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1217 19:21:40.485346 8502 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-886556"
I1217 19:21:40.485521 8502 host.go:66] Checking if "addons-886556" exists ...
I1217 19:21:40.485976 8502 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1217 19:21:40.485992 8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1217 19:21:40.486716 8502 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1217 19:21:40.486821 8502 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1217 19:21:40.487109 8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1217 19:21:40.486724 8502 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1217 19:21:40.487271 8502 host.go:66] Checking if "addons-886556" exists ...
W1217 19:21:40.487613 8502 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1217 19:21:40.487754 8502 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1217 19:21:40.487765 8502 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1217 19:21:40.487757 8502 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.6
I1217 19:21:40.487782 8502 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1217 19:21:40.488636 8502 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1217 19:21:40.488760 8502 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1217 19:21:40.489045 8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1217 19:21:40.487822 8502 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
I1217 19:21:40.487784 8502 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1217 19:21:40.489786 8502 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1217 19:21:40.489372 8502 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1217 19:21:40.489803 8502 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1217 19:21:40.489815 8502 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1217 19:21:40.490200 8502 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1217 19:21:40.490210 8502 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1217 19:21:40.490217 8502 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1217 19:21:40.490305 8502 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1217 19:21:40.490657 8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1217 19:21:40.491095 8502 out.go:179] - Using image docker.io/registry:3.0.0
I1217 19:21:40.492663 8502 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1217 19:21:40.492711 8502 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1217 19:21:40.492711 8502 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1217 19:21:40.492735 8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1217 19:21:40.492663 8502 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
I1217 19:21:40.493970 8502 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1217 19:21:40.494001 8502 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1217 19:21:40.495909 8502 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1217 19:21:40.496003 8502 out.go:179] - Using image docker.io/busybox:stable
I1217 19:21:40.496061 8502 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1217 19:21:40.496077 8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1217 19:21:40.496561 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.497163 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.497394 8502 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1217 19:21:40.497410 8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1217 19:21:40.498135 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.498326 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.498421 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:40.498456 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.498462 8502 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1217 19:21:40.498894 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.499173 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:40.499206 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.499267 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:40.500080 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:40.500123 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:40.500137 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:40.500149 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.500156 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.500593 8502 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1217 19:21:40.500692 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:40.500723 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.501099 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:40.501451 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:40.501930 8502 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1217 19:21:40.501810 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.501951 8502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1217 19:21:40.502020 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.501887 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:40.502745 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.503201 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.503373 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:40.503406 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.503552 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:40.503589 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.503787 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.503851 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:40.503879 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.503799 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:40.504010 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.504312 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:40.504659 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:40.504701 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:40.504730 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.505075 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:40.505101 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:40.505135 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.505143 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.505488 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:40.505515 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:40.505563 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.505874 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:40.506140 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:40.506178 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.506512 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:40.506888 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.507344 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:40.507378 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.507564 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.507597 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:40.507892 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.508049 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:40.508079 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.508250 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:40.508266 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:40.508288 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:40.508501 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
W1217 19:21:40.813649 8502 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50364->192.168.39.92:22: read: connection reset by peer
I1217 19:21:40.813686 8502 retry.go:31] will retry after 158.466174ms: ssh: handshake failed: read tcp 192.168.39.1:50364->192.168.39.92:22: read: connection reset by peer
W1217 19:21:40.897865 8502 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50392->192.168.39.92:22: read: connection reset by peer
I1217 19:21:40.897894 8502 retry.go:31] will retry after 206.861546ms: ssh: handshake failed: read tcp 192.168.39.1:50392->192.168.39.92:22: read: connection reset by peer
W1217 19:21:40.897945 8502 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50406->192.168.39.92:22: read: connection reset by peer
I1217 19:21:40.897952 8502 retry.go:31] will retry after 297.072336ms: ssh: handshake failed: read tcp 192.168.39.1:50406->192.168.39.92:22: read: connection reset by peer
W1217 19:21:40.972836 8502 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I1217 19:21:40.972871 8502 retry.go:31] will retry after 264.316513ms: ssh: handshake failed: EOF
I1217 19:21:41.679362 8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1217 19:21:41.718474 8502 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1217 19:21:41.718538 8502 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1217 19:21:41.726745 8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1217 19:21:41.730336 8502 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1217 19:21:41.730364 8502 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1217 19:21:41.735134 8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1217 19:21:41.737175 8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1217 19:21:41.792374 8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1217 19:21:41.801330 8502 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1217 19:21:41.801356 8502 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1217 19:21:41.851216 8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1217 19:21:41.883237 8502 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1217 19:21:41.883265 8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1217 19:21:41.925208 8502 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.451598746s)
I1217 19:21:41.925277 8502 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.447094384s)
I1217 19:21:41.925362 8502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1217 19:21:41.925371 8502 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1217 19:21:41.973415 8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1217 19:21:42.085214 8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1217 19:21:42.105256 8502 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1217 19:21:42.105283 8502 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1217 19:21:42.143787 8502 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1217 19:21:42.143809 8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1217 19:21:42.146900 8502 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1217 19:21:42.146920 8502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1217 19:21:42.163409 8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1217 19:21:42.237062 8502 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1217 19:21:42.237094 8502 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1217 19:21:42.259793 8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1217 19:21:42.260571 8502 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1217 19:21:42.260592 8502 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1217 19:21:42.406120 8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1217 19:21:42.408799 8502 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1217 19:21:42.408826 8502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1217 19:21:42.429065 8502 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1217 19:21:42.429088 8502 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1217 19:21:42.509602 8502 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1217 19:21:42.509630 8502 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1217 19:21:42.524293 8502 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1217 19:21:42.524315 8502 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1217 19:21:42.772551 8502 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1217 19:21:42.772579 8502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1217 19:21:42.908823 8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1217 19:21:42.923626 8502 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1217 19:21:42.923648 8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1217 19:21:42.985385 8502 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1217 19:21:42.985421 8502 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1217 19:21:43.234380 8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.554978122s)
I1217 19:21:43.267615 8502 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1217 19:21:43.267649 8502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1217 19:21:43.283627 8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1217 19:21:43.337602 8502 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1217 19:21:43.337634 8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1217 19:21:43.658381 8502 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1217 19:21:43.658408 8502 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1217 19:21:43.834562 8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1217 19:21:44.126762 8502 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1217 19:21:44.126788 8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1217 19:21:44.635308 8502 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1217 19:21:44.635339 8502 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1217 19:21:44.993636 8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.266841841s)
I1217 19:21:45.129435 8502 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1217 19:21:45.129469 8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1217 19:21:45.597811 8502 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1217 19:21:45.597834 8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1217 19:21:46.022947 8502 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1217 19:21:46.022980 8502 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1217 19:21:46.523124 8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1217 19:21:47.630300 8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.89308497s)
I1217 19:21:47.630354 8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.837945861s)
I1217 19:21:47.630465 8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.895296003s)
I1217 19:21:48.013967 8502 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1217 19:21:48.017699 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:48.018177 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:48.018209 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:48.018393 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:48.605673 8502 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1217 19:21:48.953080 8502 addons.go:239] Setting addon gcp-auth=true in "addons-886556"
I1217 19:21:48.953139 8502 host.go:66] Checking if "addons-886556" exists ...
I1217 19:21:48.955071 8502 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1217 19:21:48.957700 8502 main.go:143] libmachine: domain addons-886556 has defined MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:48.958167 8502 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a0:a1:59", ip: ""} in network mk-addons-886556: {Iface:virbr1 ExpiryTime:2025-12-17 20:21:13 +0000 UTC Type:0 Mac:52:54:00:a0:a1:59 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-886556 Clientid:01:52:54:00:a0:a1:59}
I1217 19:21:48.958200 8502 main.go:143] libmachine: domain addons-886556 has defined IP address 192.168.39.92 and MAC address 52:54:00:a0:a1:59 in network mk-addons-886556
I1217 19:21:48.958395 8502 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22186-3611/.minikube/machines/addons-886556/id_rsa Username:docker}
I1217 19:21:50.424100 8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.572852063s)
I1217 19:21:50.424136 8502 addons.go:495] Verifying addon ingress=true in "addons-886556"
I1217 19:21:50.424156 8502 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.498766337s)
I1217 19:21:50.424183 8502 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.49879458s)
I1217 19:21:50.424280 8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.339043218s)
I1217 19:21:50.424356 8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.260925596s)
I1217 19:21:50.424227 8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.450779052s)
I1217 19:21:50.424415 8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.164595771s)
I1217 19:21:50.424456 8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.018315091s)
I1217 19:21:50.424480 8502 addons.go:495] Verifying addon registry=true in "addons-886556"
I1217 19:21:50.424516 8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.515663844s)
I1217 19:21:50.424547 8502 addons.go:495] Verifying addon metrics-server=true in "addons-886556"
I1217 19:21:50.424607 8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.140944914s)
I1217 19:21:50.424183 8502 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1217 19:21:50.425043 8502 node_ready.go:35] waiting up to 6m0s for node "addons-886556" to be "Ready" ...
I1217 19:21:50.425732 8502 out.go:179] * Verifying ingress addon...
I1217 19:21:50.426658 8502 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-886556 service yakd-dashboard -n yakd-dashboard
I1217 19:21:50.426682 8502 out.go:179] * Verifying registry addon...
I1217 19:21:50.428275 8502 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1217 19:21:50.428898 8502 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1217 19:21:50.483977 8502 node_ready.go:49] node "addons-886556" is "Ready"
I1217 19:21:50.484006 8502 node_ready.go:38] duration metric: took 58.936702ms for node "addons-886556" to be "Ready" ...
I1217 19:21:50.484027 8502 api_server.go:52] waiting for apiserver process to appear ...
I1217 19:21:50.484090 8502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 19:21:50.558240 8502 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1217 19:21:50.558259 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:21:50.564938 8502 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1217 19:21:50.564957 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:50.973795 8502 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-886556" context rescaled to 1 replicas
I1217 19:21:50.976036 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:21:50.976221 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:50.984688 8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.150087338s)
W1217 19:21:50.984733 8502 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1217 19:21:50.984755 8502 retry.go:31] will retry after 263.9708ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1217 19:21:51.249819 8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1217 19:21:51.441437 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:21:51.444567 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:51.831016 8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.307833605s)
I1217 19:21:51.831054 8502 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.875953123s)
I1217 19:21:51.831066 8502 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-886556"
I1217 19:21:51.831103 8502 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.346996092s)
I1217 19:21:51.831121 8502 api_server.go:72] duration metric: took 11.357437459s to wait for apiserver process to appear ...
I1217 19:21:51.831134 8502 api_server.go:88] waiting for apiserver healthz status ...
I1217 19:21:51.831316 8502 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
I1217 19:21:51.832519 8502 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1217 19:21:51.832554 8502 out.go:179] * Verifying csi-hostpath-driver addon...
I1217 19:21:51.833756 8502 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1217 19:21:51.834606 8502 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1217 19:21:51.834937 8502 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1217 19:21:51.834952 8502 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1217 19:21:51.878563 8502 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1217 19:21:51.878586 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:21:51.888739 8502 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
ok
I1217 19:21:51.896070 8502 api_server.go:141] control plane version: v1.34.3
I1217 19:21:51.896104 8502 api_server.go:131] duration metric: took 64.834878ms to wait for apiserver health ...
I1217 19:21:51.896112 8502 system_pods.go:43] waiting for kube-system pods to appear ...
I1217 19:21:51.935765 8502 system_pods.go:59] 20 kube-system pods found
I1217 19:21:51.935846 8502 system_pods.go:61] "amd-gpu-device-plugin-z6w8r" [1dbe0a3c-a1f6-46e6-beac-d8931e039819] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1217 19:21:51.935858 8502 system_pods.go:61] "coredns-66bc5c9577-bgtrc" [96c9cfe3-ccd5-4697-8f1b-a72ebef1425b] Running
I1217 19:21:51.935866 8502 system_pods.go:61] "coredns-66bc5c9577-xndpj" [cadb243f-ae46-400c-8188-a780a9a4974f] Running
I1217 19:21:51.935874 8502 system_pods.go:61] "csi-hostpath-attacher-0" [585eb515-b0dc-4a5e-a272-1a0541460d7d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1217 19:21:51.935887 8502 system_pods.go:61] "csi-hostpath-resizer-0" [b286d59e-b1f1-43e0-95f4-45423fecf6d6] Pending
I1217 19:21:51.935898 8502 system_pods.go:61] "csi-hostpathplugin-6fj9g" [97f5d123-7341-4ca5-9f44-39d65d8a4a4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1217 19:21:51.935911 8502 system_pods.go:61] "etcd-addons-886556" [d8286b3c-24af-4b3e-8fb6-f96c18635f73] Running
I1217 19:21:51.935918 8502 system_pods.go:61] "kube-apiserver-addons-886556" [74777e79-dac2-44c2-9c7c-dd2f363fe062] Running
I1217 19:21:51.935923 8502 system_pods.go:61] "kube-controller-manager-addons-886556" [cace1c52-4336-4fb0-8de2-26bd11dc3ac8] Running
I1217 19:21:51.935935 8502 system_pods.go:61] "kube-ingress-dns-minikube" [665e2f71-8383-415a-89ea-cb281553dc9e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1217 19:21:51.935946 8502 system_pods.go:61] "kube-proxy-tmm7b" [1dcd502e-bfdd-41d4-911e-b8cb873ebb8c] Running
I1217 19:21:51.935953 8502 system_pods.go:61] "kube-scheduler-addons-886556" [e4e24a77-0291-4ac3-a317-13537ba593ad] Running
I1217 19:21:51.935964 8502 system_pods.go:61] "metrics-server-85b7d694d7-qq7z2" [1a0a29d5-b863-4f43-8e30-20e811421d49] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1217 19:21:51.935976 8502 system_pods.go:61] "nvidia-device-plugin-daemonset-9r9hc" [687ccec9-fd49-4130-942a-adaa42174493] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1217 19:21:51.935990 8502 system_pods.go:61] "registry-6b586f9694-7vxz4" [51d280f0-5585-48ff-9878-7cdf3f790c88] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1217 19:21:51.936003 8502 system_pods.go:61] "registry-creds-764b6fb674-7jdnm" [61a01fac-adbf-4010-981c-9c91b42e786e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1217 19:21:51.936016 8502 system_pods.go:61] "registry-proxy-zf2zm" [d7cb4d26-907e-4609-8385-a07e0958bd41] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1217 19:21:51.936029 8502 system_pods.go:61] "snapshot-controller-7d9fbc56b8-96c6l" [6882de24-8733-4ef1-88d5-73ffcab02127] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1217 19:21:51.936046 8502 system_pods.go:61] "snapshot-controller-7d9fbc56b8-w7czp" [f4b470a5-b443-4c15-911f-8b4bc6ac894d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1217 19:21:51.936061 8502 system_pods.go:61] "storage-provisioner" [e51b534c-7297-4901-a6e7-63d89d9275dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1217 19:21:51.936073 8502 system_pods.go:74] duration metric: took 39.952611ms to wait for pod list to return data ...
I1217 19:21:51.936089 8502 default_sa.go:34] waiting for default service account to be created ...
I1217 19:21:51.960460 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:21:51.963147 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:51.968320 8502 default_sa.go:45] found service account: "default"
I1217 19:21:51.968351 8502 default_sa.go:55] duration metric: took 32.251173ms for default service account to be created ...
I1217 19:21:51.968364 8502 system_pods.go:116] waiting for k8s-apps to be running ...
I1217 19:21:51.981850 8502 system_pods.go:86] 20 kube-system pods found
I1217 19:21:51.981890 8502 system_pods.go:89] "amd-gpu-device-plugin-z6w8r" [1dbe0a3c-a1f6-46e6-beac-d8931e039819] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1217 19:21:51.981914 8502 system_pods.go:89] "coredns-66bc5c9577-bgtrc" [96c9cfe3-ccd5-4697-8f1b-a72ebef1425b] Running
I1217 19:21:51.981923 8502 system_pods.go:89] "coredns-66bc5c9577-xndpj" [cadb243f-ae46-400c-8188-a780a9a4974f] Running
I1217 19:21:51.981930 8502 system_pods.go:89] "csi-hostpath-attacher-0" [585eb515-b0dc-4a5e-a272-1a0541460d7d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1217 19:21:51.981937 8502 system_pods.go:89] "csi-hostpath-resizer-0" [b286d59e-b1f1-43e0-95f4-45423fecf6d6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1217 19:21:51.981947 8502 system_pods.go:89] "csi-hostpathplugin-6fj9g" [97f5d123-7341-4ca5-9f44-39d65d8a4a4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1217 19:21:51.981953 8502 system_pods.go:89] "etcd-addons-886556" [d8286b3c-24af-4b3e-8fb6-f96c18635f73] Running
I1217 19:21:51.981962 8502 system_pods.go:89] "kube-apiserver-addons-886556" [74777e79-dac2-44c2-9c7c-dd2f363fe062] Running
I1217 19:21:51.981971 8502 system_pods.go:89] "kube-controller-manager-addons-886556" [cace1c52-4336-4fb0-8de2-26bd11dc3ac8] Running
I1217 19:21:51.981984 8502 system_pods.go:89] "kube-ingress-dns-minikube" [665e2f71-8383-415a-89ea-cb281553dc9e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1217 19:21:51.981994 8502 system_pods.go:89] "kube-proxy-tmm7b" [1dcd502e-bfdd-41d4-911e-b8cb873ebb8c] Running
I1217 19:21:51.982000 8502 system_pods.go:89] "kube-scheduler-addons-886556" [e4e24a77-0291-4ac3-a317-13537ba593ad] Running
I1217 19:21:51.982007 8502 system_pods.go:89] "metrics-server-85b7d694d7-qq7z2" [1a0a29d5-b863-4f43-8e30-20e811421d49] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1217 19:21:51.982016 8502 system_pods.go:89] "nvidia-device-plugin-daemonset-9r9hc" [687ccec9-fd49-4130-942a-adaa42174493] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1217 19:21:51.982028 8502 system_pods.go:89] "registry-6b586f9694-7vxz4" [51d280f0-5585-48ff-9878-7cdf3f790c88] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1217 19:21:51.982036 8502 system_pods.go:89] "registry-creds-764b6fb674-7jdnm" [61a01fac-adbf-4010-981c-9c91b42e786e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1217 19:21:51.982048 8502 system_pods.go:89] "registry-proxy-zf2zm" [d7cb4d26-907e-4609-8385-a07e0958bd41] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1217 19:21:51.982057 8502 system_pods.go:89] "snapshot-controller-7d9fbc56b8-96c6l" [6882de24-8733-4ef1-88d5-73ffcab02127] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1217 19:21:51.982070 8502 system_pods.go:89] "snapshot-controller-7d9fbc56b8-w7czp" [f4b470a5-b443-4c15-911f-8b4bc6ac894d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1217 19:21:51.982079 8502 system_pods.go:89] "storage-provisioner" [e51b534c-7297-4901-a6e7-63d89d9275dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1217 19:21:51.982093 8502 system_pods.go:126] duration metric: took 13.721224ms to wait for k8s-apps to be running ...
I1217 19:21:51.982108 8502 system_svc.go:44] waiting for kubelet service to be running ....
I1217 19:21:51.982158 8502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1217 19:21:51.985938 8502 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1217 19:21:51.985963 8502 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1217 19:21:52.099803 8502 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1217 19:21:52.099832 8502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1217 19:21:52.152451 8502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1217 19:21:52.344609 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:21:52.443598 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:21:52.443638 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:52.840415 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:21:52.936447 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:21:52.937459 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:53.209718 8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.95985204s)
I1217 19:21:53.209741 8502 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.227562478s)
I1217 19:21:53.209778 8502 system_svc.go:56] duration metric: took 1.227665633s WaitForService to wait for kubelet
I1217 19:21:53.209793 8502 kubeadm.go:587] duration metric: took 12.736107872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1217 19:21:53.209819 8502 node_conditions.go:102] verifying NodePressure condition ...
I1217 19:21:53.217219 8502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1217 19:21:53.217249 8502 node_conditions.go:123] node cpu capacity is 2
I1217 19:21:53.217266 8502 node_conditions.go:105] duration metric: took 7.440359ms to run NodePressure ...
I1217 19:21:53.217280 8502 start.go:242] waiting for startup goroutines ...
I1217 19:21:53.359621 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:21:53.470918 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:21:53.477784 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:53.722863 8502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.570354933s)
I1217 19:21:53.724055 8502 addons.go:495] Verifying addon gcp-auth=true in "addons-886556"
I1217 19:21:53.726959 8502 out.go:179] * Verifying gcp-auth addon...
I1217 19:21:53.729079 8502 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1217 19:21:53.753665 8502 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1217 19:21:53.753687 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:21:53.854963 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:21:53.938959 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:21:53.942951 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:54.234428 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:21:54.359344 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:21:54.460350 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:21:54.461013 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:54.733520 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:21:54.839864 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:21:54.932075 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:54.939217 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:21:55.233635 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:21:55.339202 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:21:55.434004 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:21:55.434077 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:55.733429 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:21:55.839443 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:21:55.933204 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:55.934074 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:21:56.237241 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:21:56.353238 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:21:56.434742 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:56.437459 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:21:56.745225 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:21:56.839658 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:21:56.944721 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:21:56.945237 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:57.234721 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:21:57.339871 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:21:57.440356 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:21:57.440539 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:57.733084 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:21:57.839989 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:21:57.935065 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:21:57.940764 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:58.239269 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:21:58.342930 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:21:58.432709 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:21:58.433916 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:58.735515 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:21:58.841631 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:21:58.936112 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:21:58.936320 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:59.234218 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:21:59.343279 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:21:59.438091 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:59.440386 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:21:59.732547 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:21:59.840134 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:21:59.933365 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:21:59.933373 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:00.233349 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:00.342425 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:00.439813 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:00.440101 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:00.734020 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:00.839469 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:00.932190 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:00.934915 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:01.232557 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:01.339359 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:01.432551 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:01.433814 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:01.734509 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:01.840710 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:01.932920 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:01.933794 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:02.233710 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:02.339557 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:02.432713 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:02.433224 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:02.732781 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:02.839392 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:02.933916 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:02.934063 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:03.232970 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:03.341994 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:03.435346 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:03.435676 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:03.734482 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:03.839926 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:03.933699 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:03.934493 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:04.234320 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:04.342220 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:04.434925 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:04.434992 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:04.733513 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:04.840999 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:04.932705 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:04.932984 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:05.253664 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:05.338808 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:05.434935 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:05.435003 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:05.733326 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:05.844396 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:05.933142 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:05.933460 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:06.234664 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:06.338667 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:06.433265 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:06.434253 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:06.733174 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:06.845961 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:06.938661 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:06.939060 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:07.235694 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:07.339013 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:07.432299 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:07.433659 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:07.733774 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:07.839003 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:07.933823 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:07.933923 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:08.234274 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:08.339969 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:08.433478 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:08.433724 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:08.733629 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:08.837948 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:08.938825 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:08.939021 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:09.232935 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:09.339058 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:09.433163 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:09.433272 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:09.733331 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:09.839735 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:09.933966 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:09.935392 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:10.236959 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:10.342728 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:10.437995 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:10.443821 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:10.734744 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:10.841991 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:10.937125 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:10.938731 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:11.237817 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:11.340551 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:11.437933 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:11.440663 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:11.736778 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:11.845115 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:11.934953 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:11.936435 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:12.238155 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:12.340565 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:12.431946 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:12.434831 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:12.737283 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:12.838913 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:12.939424 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:12.939751 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:13.238685 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:13.441085 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:13.449590 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:13.452486 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:13.736704 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:13.840293 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:13.937557 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:13.938875 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:14.235284 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:14.341809 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:14.436231 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:14.440004 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:14.733570 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:14.840735 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:15.034005 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:15.035598 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:15.234183 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:15.340704 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:15.431909 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:15.435538 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:15.736474 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:15.840515 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:15.933005 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:15.938668 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:16.233979 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:16.342114 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:16.432422 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:16.435759 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:16.735389 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:16.839762 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:16.937260 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:16.939115 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:17.238960 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:17.348944 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:17.433254 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:17.434937 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:17.736566 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:17.838369 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:17.935563 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:17.935879 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:18.233492 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:18.340626 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:18.434051 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:18.434334 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:18.732657 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:18.839133 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:18.933541 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:18.936017 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:19.233611 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:19.338855 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:19.433259 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:19.434379 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:19.732511 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:19.839705 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:19.932944 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:19.933232 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:20.232837 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:20.341244 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:20.435711 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:20.437837 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:20.736475 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:20.839724 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:20.935829 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:20.937461 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:21.234269 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:21.341469 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:21.437099 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:21.440897 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:21.735004 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:21.841716 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:21.940646 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:21.940939 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:22.232343 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:22.340129 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:22.432276 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:22.432710 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:22.734589 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:22.839238 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:22.932925 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:22.934490 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:23.232689 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:23.338508 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:23.432757 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:23.433174 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:23.734012 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:23.838707 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:23.932150 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:23.932776 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:24.233789 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:24.341026 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:24.437247 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:24.437432 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:24.734919 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:24.839464 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:24.933604 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:24.935175 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:25.234988 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:25.339402 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:25.432024 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:25.434225 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:25.736607 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:25.840739 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:25.935361 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:25.935620 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:26.234158 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:26.339875 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:26.433218 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:26.433855 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:26.735895 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:26.840603 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:26.932295 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:26.934268 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:27.235743 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:27.341244 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:27.434626 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 19:22:27.435665 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:27.736117 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:27.842335 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:27.934880 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:27.935803 8502 kapi.go:107] duration metric: took 37.506902072s to wait for kubernetes.io/minikube-addons=registry ...
I1217 19:22:28.234127 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:28.339121 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:28.434024 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:28.816605 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:28.891642 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:28.934141 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:29.232768 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:29.338330 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:29.432676 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:29.743187 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:29.843363 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:29.932432 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:30.232655 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:30.338067 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:30.432676 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:30.733553 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:30.839874 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:30.932356 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:31.232304 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:31.342822 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:31.435243 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:31.732598 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:31.842359 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:31.932837 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:32.238002 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:32.339102 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:32.435168 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:32.734015 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:32.839421 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:32.935248 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:33.236904 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:33.346595 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:33.442620 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:33.733112 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:33.840716 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:33.934050 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:34.233131 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:34.341156 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:34.434741 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:34.732995 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:34.838643 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:34.931699 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:35.233370 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:35.339623 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:35.432716 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:35.732595 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:35.839689 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:35.940232 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:36.232781 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:36.338921 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:36.432063 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:36.731700 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:36.842769 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:36.932010 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:37.234221 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:37.340400 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:37.526396 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:37.734800 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:37.839130 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:37.933439 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:38.233027 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:38.340464 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:38.432622 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:38.733191 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:38.840639 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:38.937357 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:39.233454 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:39.339701 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:39.432033 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:39.735320 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:39.840189 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:39.935291 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:40.232945 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:40.339237 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:40.434853 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:40.731862 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:40.838476 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:40.932039 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:41.231961 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:41.338863 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:41.437765 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:41.734607 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:41.838149 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:41.932687 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:42.233077 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:42.339168 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:42.432368 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:42.735585 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:42.838269 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:42.933517 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:43.235990 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:43.561818 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:43.561966 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:43.737120 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:43.838881 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:43.933152 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:44.236804 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:44.341541 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:44.434170 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:44.734185 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:44.839901 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:44.931988 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:45.233205 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:45.340500 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:45.431786 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:45.735962 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:45.840675 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:45.932856 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:46.234945 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:46.339034 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:46.433919 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:46.734783 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:46.841406 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:46.934957 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:47.238126 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:47.339038 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:47.434012 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:47.738904 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:47.839043 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:47.937312 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:48.236251 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:48.339475 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:48.433429 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:48.735000 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:48.840588 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:48.934898 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:49.234420 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:49.349421 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:49.433891 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:49.735340 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:49.839405 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:49.932911 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:50.234152 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:50.339686 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:50.432672 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:50.734482 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:50.846160 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:50.935701 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:51.232518 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:51.343432 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:51.436858 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:51.734035 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:51.842425 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:51.942305 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:52.233390 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:52.350383 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:52.436573 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:52.734604 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:52.842618 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:52.937124 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:53.569781 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:53.569895 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:53.570824 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:53.733647 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:53.842656 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:53.944145 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:54.234979 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:54.349480 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:54.433768 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:54.735417 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:54.840875 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:54.934672 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:55.238241 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:55.340475 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:55.443641 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:55.735092 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:55.842457 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:55.935969 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:56.235859 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:56.338587 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:56.433367 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:56.735074 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:56.839619 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:56.931821 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:57.233705 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:57.341845 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:57.437693 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:57.732796 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:57.839014 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:57.934645 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:58.235982 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:58.340983 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:58.435006 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:58.751880 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:59.024243 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:59.027423 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:59.235133 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:59.338729 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:59.431668 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:22:59.732998 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:22:59.841991 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:22:59.939014 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:23:00.235840 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:23:00.339160 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:00.432970 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:23:00.733043 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:23:00.838852 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:00.932938 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:23:01.233725 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:23:01.337949 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:01.432843 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:23:01.738044 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:23:01.842212 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:02.162095 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:23:02.233736 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:23:02.338913 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:02.434474 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:23:02.733433 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:23:02.840399 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:02.932380 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:23:03.236199 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:23:03.358437 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:03.433309 8502 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 19:23:03.733313 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:23:03.839945 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:03.934510 8502 kapi.go:107] duration metric: took 1m13.506231166s to wait for app.kubernetes.io/name=ingress-nginx ...
I1217 19:23:04.235692 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:23:04.340565 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:04.735287 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:23:04.840630 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:05.233565 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:23:05.340006 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:05.733216 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:23:05.839544 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:06.233813 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:23:06.340424 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:06.733835 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:23:06.847951 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:07.236307 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:23:07.339192 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:07.735346 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:23:07.841731 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:08.234209 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:23:08.341047 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:08.732889 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:23:08.839965 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:09.234331 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 19:23:09.341597 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:09.734074 8502 kapi.go:107] duration metric: took 1m16.004994998s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1217 19:23:09.735916 8502 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-886556 cluster.
I1217 19:23:09.737437 8502 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1217 19:23:09.738904 8502 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I1217 19:23:09.841191 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:10.343089 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:10.842783 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:11.341925 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:11.841220 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:12.340015 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:12.839452 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:13.341158 8502 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 19:23:13.839177 8502 kapi.go:107] duration metric: took 1m22.004570276s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1217 19:23:13.841161 8502 out.go:179] * Enabled addons: default-storageclass, cloud-spanner, storage-provisioner, amd-gpu-device-plugin, inspektor-gadget, ingress-dns, registry-creds, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
I1217 19:23:13.842419 8502 addons.go:530] duration metric: took 1m33.368643369s for enable addons: enabled=[default-storageclass cloud-spanner storage-provisioner amd-gpu-device-plugin inspektor-gadget ingress-dns registry-creds nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
I1217 19:23:13.842475 8502 start.go:247] waiting for cluster config update ...
I1217 19:23:13.842504 8502 start.go:256] writing updated cluster config ...
I1217 19:23:13.842825 8502 ssh_runner.go:195] Run: rm -f paused
I1217 19:23:13.853136 8502 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1217 19:23:13.860762 8502 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xndpj" in "kube-system" namespace to be "Ready" or be gone ...
I1217 19:23:13.869825 8502 pod_ready.go:94] pod "coredns-66bc5c9577-xndpj" is "Ready"
I1217 19:23:13.869853 8502 pod_ready.go:86] duration metric: took 9.058747ms for pod "coredns-66bc5c9577-xndpj" in "kube-system" namespace to be "Ready" or be gone ...
I1217 19:23:13.873268 8502 pod_ready.go:83] waiting for pod "etcd-addons-886556" in "kube-system" namespace to be "Ready" or be gone ...
I1217 19:23:13.881166 8502 pod_ready.go:94] pod "etcd-addons-886556" is "Ready"
I1217 19:23:13.881199 8502 pod_ready.go:86] duration metric: took 7.898744ms for pod "etcd-addons-886556" in "kube-system" namespace to be "Ready" or be gone ...
I1217 19:23:13.884855 8502 pod_ready.go:83] waiting for pod "kube-apiserver-addons-886556" in "kube-system" namespace to be "Ready" or be gone ...
I1217 19:23:13.894692 8502 pod_ready.go:94] pod "kube-apiserver-addons-886556" is "Ready"
I1217 19:23:13.894720 8502 pod_ready.go:86] duration metric: took 9.839755ms for pod "kube-apiserver-addons-886556" in "kube-system" namespace to be "Ready" or be gone ...
I1217 19:23:13.898037 8502 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-886556" in "kube-system" namespace to be "Ready" or be gone ...
I1217 19:23:14.259100 8502 pod_ready.go:94] pod "kube-controller-manager-addons-886556" is "Ready"
I1217 19:23:14.259131 8502 pod_ready.go:86] duration metric: took 361.065761ms for pod "kube-controller-manager-addons-886556" in "kube-system" namespace to be "Ready" or be gone ...
I1217 19:23:14.459320 8502 pod_ready.go:83] waiting for pod "kube-proxy-tmm7b" in "kube-system" namespace to be "Ready" or be gone ...
I1217 19:23:14.858358 8502 pod_ready.go:94] pod "kube-proxy-tmm7b" is "Ready"
I1217 19:23:14.858385 8502 pod_ready.go:86] duration metric: took 399.024903ms for pod "kube-proxy-tmm7b" in "kube-system" namespace to be "Ready" or be gone ...
I1217 19:23:15.058685 8502 pod_ready.go:83] waiting for pod "kube-scheduler-addons-886556" in "kube-system" namespace to be "Ready" or be gone ...
I1217 19:23:15.458699 8502 pod_ready.go:94] pod "kube-scheduler-addons-886556" is "Ready"
I1217 19:23:15.458728 8502 pod_ready.go:86] duration metric: took 400.011797ms for pod "kube-scheduler-addons-886556" in "kube-system" namespace to be "Ready" or be gone ...
I1217 19:23:15.458742 8502 pod_ready.go:40] duration metric: took 1.605568743s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1217 19:23:15.509910 8502 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
I1217 19:23:15.512545 8502 out.go:179] * Done! kubectl is now configured to use "addons-886556" cluster and "default" namespace by default
==> CRI-O <==
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.011349817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765999588011285431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551113,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10dcc2ed-e717-4404-ab4d-f56620530ef8 name=/runtime.v1.ImageService/ImageFsInfo
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.014255974Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=663b610e-1c76-40a8-be12-009040d7a141 name=/runtime.v1.RuntimeService/ListContainers
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.014635338Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=663b610e-1c76-40a8-be12-009040d7a141 name=/runtime.v1.RuntimeService/ListContainers
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.015605767Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:509c2e34906e3c016d03d51db722a340d140c0ed93e7fe3c711e9850b6570161,PodSandboxId:ab87d79e904d52cec8a616a57e0a377a4236bcc9340e467029894f8b9bb3a395,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765999445844268114,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6dccff02-c09a-4293-83a1-fd22a7c40b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81ab5371b23fe766fc6be499ce17f2093c0f26ec5dae9f5758074ff01194c13b,PodSandboxId:0b7413f5092011d719adf5bd50f250a94a00c1c099a8e42118676aa95c1933e2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765999400971257972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be54a14-f7e4-4cce-a350-4f3c9438f053,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83bfed642b76ad03ba6be908e3f00f6e22893595b6905d9a972bcc02ec8db95c,PodSandboxId:4230ae39e001d85cb49e9c9db1deab46baa32a80d9c9b9ea791c3042fefd07e3,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765999382449717188,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d7b4h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cdae934d-e441-44d9-8be3-38eda9dbad52,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f1ae3617bb443308f1cccdabefc2860f9716c1b01e9f7982834af654e5f87f1,PodSandboxId:9ae07f54f02cef14156432bdf3b38be60352efae4fa7d61f30ea4f078d6b961e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765999382401366680,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-2lds5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5a310601-c39b-42c0-a572-5471fbb24856,},Annotations:map[string]string{io.kubernetes.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cbfa46bd7651d15668ba7d18b6803cfff873f769b7a87dde5dccf615ecb8645e,PodSandboxId:5cb502d1fe0a1fb64e06f3134da1eb3b436887888664fee82b722f2df774fb3e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765999367952728755,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fg4xw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0fe75cec-73ac-48c0-81b3-fc95913a3fbd,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb1564c0fd8234f5ffc6fac2d4f1840e80da9bb871b6d578d43e86aa34bbe86,PodSandboxId:58e9f5510ce02ca6ff09f198c69741f18c5b9ee30ff8d0d5f0937c8b9d654667,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765999337624421140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 665e2f71-8383-415a-89ea-cb281553dc9e,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842e06a810a24be894b68fbaa693a58ec50f0780b6754c852d8edbb419ae904e,PodSandboxId:c5947a163b040dfa0a7db2ee1530e171bcc0f8adf6afa8ad14f7a3247c4ff2e0,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765999312566436038,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-z6w8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dbe0a3c-a1f6-46e6-beac-d8931e039819,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17df536b7e48e5734a2ace40c0cfdca4505d136dadcb49d48a341c5ad44be2f,PodSandboxId:1dc437030c33e1f1b1c3f7446b75eb228f28ea94d1356064d5dd5f9cf7ae961c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765999311590572973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51b534c-7297-4901-a6e7-63d89d9275dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08c9eb9a61ed32f25d7494c83d66ee26c80547bdfab107bdfafd06c71a2ce806,PodSandboxId:1bd82f4b4a856e37598b87b018b24c4eead12a4497a85917c3d6b85ac6a028a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765999301536234157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xndpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cadb243f-ae46-400c-8188-a780a9a4974f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18a26473585ecbde03c3eae1a070a8594be51992faebfb78ac9a623c2fd6e6c,PodSandboxId:609664b8e6016381ee97b9d8602bb0a177dd801ecc704bd7130ad1e285a236dc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765999300183927893,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmm7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcd502e-bfdd-41d4-911e-b8cb873ebb8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9f0548c6961aa8c7b3e0b8fb9bdae38d6af780451f3b6c84a7aedb37b1535f3,PodSandboxId:7d557bd8b150e0672357cad152f999aa5e279782d859fed266a443cd72e9a535,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765999287250536093,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 883b266f381f344ce15f08d1cdc57113,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e9006ec843a3c6ebc7d5a43db3e8e6d3798a906f5eae750ae071dcedce2d68,PodSandboxId:1c3372d0e8f698bbe6acbf4a19f230c2a81aebb86c68b93f563a057aaeb1fd45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765999287279063135,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1f8a4b64afdd22b1a13b05efdc91f50,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e9c28401ad79fd2540b52da660361d5bb63b6cdeeb79bbf826a753949bd7b5,PodSandboxId:354185a9c4dc542ecb18d84642e4dca83747cbba64ac2bf8693e84ccc579b684,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765999287231033799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a77be74558da47219e6b04daea8f969,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:638eb74bc3cef930c7aea686a3049dad59b532c6928943988a51c6a42a17fd62,PodSandboxId:b561eebc576741724fb933d2adbb05606abb086448e25dc0a4c21240c6eda634,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765999287203307145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-886556,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 8c2aa17c88f5bbf01e49cd999fb78dc2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=663b610e-1c76-40a8-be12-009040d7a141 name=/runtime.v1.RuntimeService/ListContainers
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.055709854Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78009c01-88a6-44f1-b28f-82de801d6d1d name=/runtime.v1.RuntimeService/Version
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.055849394Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78009c01-88a6-44f1-b28f-82de801d6d1d name=/runtime.v1.RuntimeService/Version
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.057811704Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f9a0ee6-3602-4437-b645-12cfddcd773a name=/runtime.v1.ImageService/ImageFsInfo
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.059084415Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765999588059051828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551113,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f9a0ee6-3602-4437-b645-12cfddcd773a name=/runtime.v1.ImageService/ImageFsInfo
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.060348594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4654f1d0-a28d-479e-90db-15833f971e16 name=/runtime.v1.RuntimeService/ListContainers
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.060429930Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4654f1d0-a28d-479e-90db-15833f971e16 name=/runtime.v1.RuntimeService/ListContainers
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.060807457Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:509c2e34906e3c016d03d51db722a340d140c0ed93e7fe3c711e9850b6570161,PodSandboxId:ab87d79e904d52cec8a616a57e0a377a4236bcc9340e467029894f8b9bb3a395,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765999445844268114,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6dccff02-c09a-4293-83a1-fd22a7c40b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81ab5371b23fe766fc6be499ce17f2093c0f26ec5dae9f5758074ff01194c13b,PodSandboxId:0b7413f5092011d719adf5bd50f250a94a00c1c099a8e42118676aa95c1933e2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765999400971257972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be54a14-f7e4-4cce-a350-4f3c9438f053,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83bfed642b76ad03ba6be908e3f00f6e22893595b6905d9a972bcc02ec8db95c,PodSandboxId:4230ae39e001d85cb49e9c9db1deab46baa32a80d9c9b9ea791c3042fefd07e3,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765999382449717188,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d7b4h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cdae934d-e441-44d9-8be3-38eda9dbad52,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f1ae3617bb443308f1cccdabefc2860f9716c1b01e9f7982834af654e5f87f1,PodSandboxId:9ae07f54f02cef14156432bdf3b38be60352efae4fa7d61f30ea4f078d6b961e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765999382401366680,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-2lds5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5a310601-c39b-42c0-a572-5471fbb24856,},Annotations:map[string]string{io.kubernetes.container.hash: 6f360
61b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cbfa46bd7651d15668ba7d18b6803cfff873f769b7a87dde5dccf615ecb8645e,PodSandboxId:5cb502d1fe0a1fb64e06f3134da1eb3b436887888664fee82b722f2df774fb3e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765999367952728755,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fg4xw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0fe75cec-73ac-48c0-81b3-fc95913a3fbd,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb1564c0fd8234f5ffc6fac2d4f1840e80da9bb871b6d578d43e86aa34bbe86,PodSandboxId:58e9f5510ce02ca6ff09f198c69741f18c5b9ee30ff8d0d5f0937c8b9d654667,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765999337624421140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 665e2f71-8383-415a-89ea-cb281553dc9e,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842e06a810a24be894b68fbaa693a58ec50f0780b6754c852d8edbb419ae904e,PodSandboxId:c5947a163b040dfa0a7db2ee1530e171bcc0f8adf6afa8ad14f7a3247c4ff2e0,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765999312566436038,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-z6w8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dbe0a3c-a1f6-46e6-beac-d8931e039819,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17df536b7e48e5734a2ace40c0cfdca4505d136dadcb49d48a341c5ad44be2f,PodSandboxId:1dc437030c33e1f1b1c3f7446b75eb228f28ea94d1356064d5dd5f9cf7ae961c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765999311590572973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51b534c-7297-4901-a6e7-63d89d9275dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08c9eb9a61ed32f25d7494c83d66ee26c80547bdfab107bdfafd06c71a2ce806,PodSandboxId:1bd82f4b4a856e37598b87b018b24c4eead12a4497a85917c3d6b85ac6a028a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765999301536234157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xndpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cadb243f-ae46-400c-8188-a780a9a4974f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18a26473585ecbde03c3eae1a070a8594be51992faebfb78ac9a623c2fd6e6c,PodSandboxId:609664b8e6016381ee97b9d8602bb0a177dd801ecc704bd7130ad1e285a236dc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765999300183927893,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmm7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcd502e-bfdd-41d4-911e-b8cb873ebb8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9f0548c6961aa8c7b3e0b8fb9bdae38d6af780451f3b6c84a7aedb37b1535f3,PodSandboxId:7d557bd8b150e0672357cad152f999aa5e279782d859fed266a443cd72e9a535,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765999287250536093,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 883b266f381f344ce15f08d1cdc57113,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e9006ec843a3c6ebc7d5a43db3e8e6d3798a906f5eae750ae071dcedce2d68,PodSandboxId:1c3372d0e8f698bbe6acbf4a19f230c2a81aebb86c68b93f563a057aaeb1fd45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765999287279063135,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1f8a4b64afdd22b1a13b05efdc91f50,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e9c28401ad79fd2540b52da660361d5bb63b6cdeeb79bbf826a753949bd7b5,PodSandboxId:354185a9c4dc542ecb18d84642e4dca83747cbba64ac2bf8693e84ccc579b684,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765999287231033799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a77be74558da47219e6b04daea8f969,},Annotations:
map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:638eb74bc3cef930c7aea686a3049dad59b532c6928943988a51c6a42a17fd62,PodSandboxId:b561eebc576741724fb933d2adbb05606abb086448e25dc0a4c21240c6eda634,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765999287203307145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-886556,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 8c2aa17c88f5bbf01e49cd999fb78dc2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4654f1d0-a28d-479e-90db-15833f971e16 name=/runtime.v1.RuntimeService/ListContainers
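The ListContainers dumps above are dense single-line blobs. When triaging a failure like this, one quick way to pull just the container names out of a captured response line is a grep/awk one-liner. A minimal sketch, assuming the `io.kubernetes.container.name: <value>,` label format shown in the log above (the `line` variable below is an abbreviated stand-in for a real log entry, not a verbatim copy):

```shell
#!/bin/sh
# Abbreviated stand-in for one ListContainersResponse line from the crio log.
line='Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.namespace: default,},State:CONTAINER_RUNNING,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.namespace: kube-system,}'

# Extract every io.kubernetes.container.name value: grep -o emits each
# "io.kubernetes.container.name: <value>" match on its own line, and awk
# prints the second whitespace-separated field (the value).
names=$(printf '%s\n' "$line" \
  | grep -o 'io\.kubernetes\.container\.name: [^,]*' \
  | awk '{print $2}')

echo "$names"
```

Piped over the full `journalctl`/crio capture instead of a single variable, the same pattern gives a quick inventory of which containers the runtime reported, without reading the raw blobs.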
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.096113728Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b70f0a03-a9ca-4fdb-9c17-b98943476173 name=/runtime.v1.RuntimeService/Version
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.096275067Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b70f0a03-a9ca-4fdb-9c17-b98943476173 name=/runtime.v1.RuntimeService/Version
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.098096879Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50857181-ee4e-45b8-af97-1a5788290033 name=/runtime.v1.ImageService/ImageFsInfo
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.099455013Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765999588099423467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551113,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50857181-ee4e-45b8-af97-1a5788290033 name=/runtime.v1.ImageService/ImageFsInfo
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.100380434Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5aa122d4-0279-49d2-8319-3abdf8ccc97b name=/runtime.v1.RuntimeService/ListContainers
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.100461463Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5aa122d4-0279-49d2-8319-3abdf8ccc97b name=/runtime.v1.RuntimeService/ListContainers
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.100822118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:509c2e34906e3c016d03d51db722a340d140c0ed93e7fe3c711e9850b6570161,PodSandboxId:ab87d79e904d52cec8a616a57e0a377a4236bcc9340e467029894f8b9bb3a395,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765999445844268114,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6dccff02-c09a-4293-83a1-fd22a7c40b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81ab5371b23fe766fc6be499ce17f2093c0f26ec5dae9f5758074ff01194c13b,PodSandboxId:0b7413f5092011d719adf5bd50f250a94a00c1c099a8e42118676aa95c1933e2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765999400971257972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be54a14-f7e4-4cce-a350-4f3c9438f053,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83bfed642b76ad03ba6be908e3f00f6e22893595b6905d9a972bcc02ec8db95c,PodSandboxId:4230ae39e001d85cb49e9c9db1deab46baa32a80d9c9b9ea791c3042fefd07e3,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765999382449717188,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d7b4h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cdae934d-e441-44d9-8be3-38eda9dbad52,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f1ae3617bb443308f1cccdabefc2860f9716c1b01e9f7982834af654e5f87f1,PodSandboxId:9ae07f54f02cef14156432bdf3b38be60352efae4fa7d61f30ea4f078d6b961e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765999382401366680,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-2lds5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5a310601-c39b-42c0-a572-5471fbb24856,},Annotations:map[string]string{io.kubernetes.container.hash: 6f360
61b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cbfa46bd7651d15668ba7d18b6803cfff873f769b7a87dde5dccf615ecb8645e,PodSandboxId:5cb502d1fe0a1fb64e06f3134da1eb3b436887888664fee82b722f2df774fb3e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765999367952728755,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fg4xw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0fe75cec-73ac-48c0-81b3-fc95913a3fbd,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb1564c0fd8234f5ffc6fac2d4f1840e80da9bb871b6d578d43e86aa34bbe86,PodSandboxId:58e9f5510ce02ca6ff09f198c69741f18c5b9ee30ff8d0d5f0937c8b9d654667,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765999337624421140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 665e2f71-8383-415a-89ea-cb281553dc9e,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842e06a810a24be894b68fbaa693a58ec50f0780b6754c852d8edbb419ae904e,PodSandboxId:c5947a163b040dfa0a7db2ee1530e171bcc0f8adf6afa8ad14f7a3247c4ff2e0,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765999312566436038,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-z6w8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dbe0a3c-a1f6-46e6-beac-d8931e039819,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17df536b7e48e5734a2ace40c0cfdca4505d136dadcb49d48a341c5ad44be2f,PodSandboxId:1dc437030c33e1f1b1c3f7446b75eb228f28ea94d1356064d5dd5f9cf7ae961c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765999311590572973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51b534c-7297-4901-a6e7-63d89d9275dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08c9eb9a61ed32f25d7494c83d66ee26c80547bdfab107bdfafd06c71a2ce806,PodSandboxId:1bd82f4b4a856e37598b87b018b24c4eead12a4497a85917c3d6b85ac6a028a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765999301536234157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xndpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cadb243f-ae46-400c-8188-a780a9a4974f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18a26473585ecbde03c3eae1a070a8594be51992faebfb78ac9a623c2fd6e6c,PodSandboxId:609664b8e6016381ee97b9d8602bb0a177dd801ecc704bd7130ad1e285a236dc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765999300183927893,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmm7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcd502e-bfdd-41d4-911e-b8cb873ebb8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9f0548c6961aa8c7b3e0b8fb9bdae38d6af780451f3b6c84a7aedb37b1535f3,PodSandboxId:7d557bd8b150e0672357cad152f999aa5e279782d859fed266a443cd72e9a535,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765999287250536093,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 883b266f381f344ce15f08d1cdc57113,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e9006ec843a3c6ebc7d5a43db3e8e6d3798a906f5eae750ae071dcedce2d68,PodSandboxId:1c3372d0e8f698bbe6acbf4a19f230c2a81aebb86c68b93f563a057aaeb1fd45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765999287279063135,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1f8a4b64afdd22b1a13b05efdc91f50,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e9c28401ad79fd2540b52da660361d5bb63b6cdeeb79bbf826a753949bd7b5,PodSandboxId:354185a9c4dc542ecb18d84642e4dca83747cbba64ac2bf8693e84ccc579b684,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765999287231033799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a77be74558da47219e6b04daea8f969,},Annotations:
map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:638eb74bc3cef930c7aea686a3049dad59b532c6928943988a51c6a42a17fd62,PodSandboxId:b561eebc576741724fb933d2adbb05606abb086448e25dc0a4c21240c6eda634,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765999287203307145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-886556,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 8c2aa17c88f5bbf01e49cd999fb78dc2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5aa122d4-0279-49d2-8319-3abdf8ccc97b name=/runtime.v1.RuntimeService/ListContainers
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.137257787Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=53abae11-fb5b-429d-a38f-8d3f4b79f1cb name=/runtime.v1.RuntimeService/Version
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.137371299Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=53abae11-fb5b-429d-a38f-8d3f4b79f1cb name=/runtime.v1.RuntimeService/Version
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.139283213Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=458ea727-dcc0-46ae-9172-cafad58ff08a name=/runtime.v1.ImageService/ImageFsInfo
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.141237028Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765999588141152989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551113,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=458ea727-dcc0-46ae-9172-cafad58ff08a name=/runtime.v1.ImageService/ImageFsInfo
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.142886959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c7d7bcf-c6c4-490b-a2b5-bef84a5a207b name=/runtime.v1.RuntimeService/ListContainers
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.143075069Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c7d7bcf-c6c4-490b-a2b5-bef84a5a207b name=/runtime.v1.RuntimeService/ListContainers
Dec 17 19:26:28 addons-886556 crio[809]: time="2025-12-17 19:26:28.143588097Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:509c2e34906e3c016d03d51db722a340d140c0ed93e7fe3c711e9850b6570161,PodSandboxId:ab87d79e904d52cec8a616a57e0a377a4236bcc9340e467029894f8b9bb3a395,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765999445844268114,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6dccff02-c09a-4293-83a1-fd22a7c40b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81ab5371b23fe766fc6be499ce17f2093c0f26ec5dae9f5758074ff01194c13b,PodSandboxId:0b7413f5092011d719adf5bd50f250a94a00c1c099a8e42118676aa95c1933e2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765999400971257972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be54a14-f7e4-4cce-a350-4f3c9438f053,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83bfed642b76ad03ba6be908e3f00f6e22893595b6905d9a972bcc02ec8db95c,PodSandboxId:4230ae39e001d85cb49e9c9db1deab46baa32a80d9c9b9ea791c3042fefd07e3,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765999382449717188,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d7b4h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cdae934d-e441-44d9-8be3-38eda9dbad52,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f1ae3617bb443308f1cccdabefc2860f9716c1b01e9f7982834af654e5f87f1,PodSandboxId:9ae07f54f02cef14156432bdf3b38be60352efae4fa7d61f30ea4f078d6b961e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765999382401366680,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-2lds5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5a310601-c39b-42c0-a572-5471fbb24856,},Annotations:map[string]string{io.kubernetes.container.hash: 6f360
61b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cbfa46bd7651d15668ba7d18b6803cfff873f769b7a87dde5dccf615ecb8645e,PodSandboxId:5cb502d1fe0a1fb64e06f3134da1eb3b436887888664fee82b722f2df774fb3e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765999367952728755,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fg4xw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0fe75cec-73ac-48c0-81b3-fc95913a3fbd,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb1564c0fd8234f5ffc6fac2d4f1840e80da9bb871b6d578d43e86aa34bbe86,PodSandboxId:58e9f5510ce02ca6ff09f198c69741f18c5b9ee30ff8d0d5f0937c8b9d654667,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765999337624421140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 665e2f71-8383-415a-89ea-cb281553dc9e,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842e06a810a24be894b68fbaa693a58ec50f0780b6754c852d8edbb419ae904e,PodSandboxId:c5947a163b040dfa0a7db2ee1530e171bcc0f8adf6afa8ad14f7a3247c4ff2e0,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765999312566436038,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-z6w8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dbe0a3c-a1f6-46e6-beac-d8931e039819,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17df536b7e48e5734a2ace40c0cfdca4505d136dadcb49d48a341c5ad44be2f,PodSandboxId:1dc437030c33e1f1b1c3f7446b75eb228f28ea94d1356064d5dd5f9cf7ae961c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765999311590572973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e51b534c-7297-4901-a6e7-63d89d9275dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08c9eb9a61ed32f25d7494c83d66ee26c80547bdfab107bdfafd06c71a2ce806,PodSandboxId:1bd82f4b4a856e37598b87b018b24c4eead12a4497a85917c3d6b85ac6a028a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765999301536234157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xndpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cadb243f-ae46-400c-8188-a780a9a4974f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f18a26473585ecbde03c3eae1a070a8594be51992faebfb78ac9a623c2fd6e6c,PodSandboxId:609664b8e6016381ee97b9d8602bb0a177dd801ecc704bd7130ad1e285a236dc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765999300183927893,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmm7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcd502e-bfdd-41d4-911e-b8cb873ebb8c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9f0548c6961aa8c7b3e0b8fb9bdae38d6af780451f3b6c84a7aedb37b1535f3,PodSandboxId:7d557bd8b150e0672357cad152f999aa5e279782d859fed266a443cd72e9a535,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765999287250536093,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 883b266f381f344ce15f08d1cdc57113,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartC
ount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e9006ec843a3c6ebc7d5a43db3e8e6d3798a906f5eae750ae071dcedce2d68,PodSandboxId:1c3372d0e8f698bbe6acbf4a19f230c2a81aebb86c68b93f563a057aaeb1fd45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765999287279063135,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1f8a4b64afdd22b1a13b05efdc91f50,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e9c28401ad79fd2540b52da660361d5bb63b6cdeeb79bbf826a753949bd7b5,PodSandboxId:354185a9c4dc542ecb18d84642e4dca83747cbba64ac2bf8693e84ccc579b684,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765999287231033799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-886556,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a77be74558da47219e6b04daea8f969,},Annotations:
map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:638eb74bc3cef930c7aea686a3049dad59b532c6928943988a51c6a42a17fd62,PodSandboxId:b561eebc576741724fb933d2adbb05606abb086448e25dc0a4c21240c6eda634,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765999287203307145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-886556,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 8c2aa17c88f5bbf01e49cd999fb78dc2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c7d7bcf-c6c4-490b-a2b5-bef84a5a207b name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER       IMAGE                                                                                                                 CREATED         STATE     NAME                      ATTEMPT   POD ID          POD                                         NAMESPACE
509c2e34906e3   public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff                    2 minutes ago   Running   nginx                     0         ab87d79e904d5   nginx                                       default
81ab5371b23fe   gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                   3 minutes ago   Running   busybox                   0         0b7413f509201   busybox                                     default
83bfed642b76a   a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                      3 minutes ago   Exited    patch                     2         4230ae39e001d   ingress-nginx-admission-patch-d7b4h         ingress-nginx
2f1ae3617bb44   registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad      3 minutes ago   Running   controller                0         9ae07f54f02ce   ingress-nginx-controller-85d4c799dd-2lds5   ingress-nginx
cbfa46bd7651d   registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago   Exited    create                    0         5cb502d1fe0a1   ingress-nginx-admission-create-fg4xw        ingress-nginx
0bb1564c0fd82   docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7        4 minutes ago   Running   minikube-ingress-dns      0         58e9f5510ce02   kube-ingress-dns-minikube                   kube-system
842e06a810a24   docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f              4 minutes ago   Running   amd-gpu-device-plugin     0         c5947a163b040   amd-gpu-device-plugin-z6w8r                 kube-system
e17df536b7e48   6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                      4 minutes ago   Running   storage-provisioner       0         1dc437030c33e   storage-provisioner                         kube-system
08c9eb9a61ed3   52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                      4 minutes ago   Running   coredns                   0         1bd82f4b4a856   coredns-66bc5c9577-xndpj                    kube-system
f18a26473585e   36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                      4 minutes ago   Running   kube-proxy                0         609664b8e6016   kube-proxy-tmm7b                            kube-system
82e9006ec843a   aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                      5 minutes ago   Running   kube-scheduler            0         1c3372d0e8f69   kube-scheduler-addons-886556                kube-system
f9f0548c6961a   5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                      5 minutes ago   Running   kube-controller-manager   0         7d557bd8b150e   kube-controller-manager-addons-886556       kube-system
c5e9c28401ad7   aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                      5 minutes ago   Running   kube-apiserver            0         354185a9c4dc5   kube-apiserver-addons-886556                kube-system
638eb74bc3cef   a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                      5 minutes ago   Running   etcd                      0         b561eebc57674   etcd-addons-886556                          kube-system
==> coredns [08c9eb9a61ed32f25d7494c83d66ee26c80547bdfab107bdfafd06c71a2ce806] <==
[INFO] 10.244.0.8:48776 - 12420 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.002198445s
[INFO] 10.244.0.8:48776 - 60972 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000475785s
[INFO] 10.244.0.8:48776 - 18661 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000155413s
[INFO] 10.244.0.8:48776 - 1812 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000175434s
[INFO] 10.244.0.8:48776 - 9007 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000119102s
[INFO] 10.244.0.8:48776 - 9524 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000205124s
[INFO] 10.244.0.8:48776 - 55124 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.002205134s
[INFO] 10.244.0.8:51736 - 49397 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000167975s
[INFO] 10.244.0.8:51736 - 49687 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131334s
[INFO] 10.244.0.8:39617 - 15192 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000092479s
[INFO] 10.244.0.8:39617 - 15414 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000152642s
[INFO] 10.244.0.8:37064 - 42364 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000064261s
[INFO] 10.244.0.8:37064 - 42582 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000136959s
[INFO] 10.244.0.8:59232 - 14994 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000350558s
[INFO] 10.244.0.8:59232 - 15198 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00009259s
[INFO] 10.244.0.23:57812 - 17927 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001093434s
[INFO] 10.244.0.23:46138 - 28032 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001655064s
[INFO] 10.244.0.23:42761 - 56883 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139257s
[INFO] 10.244.0.23:57580 - 52478 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000225017s
[INFO] 10.244.0.23:44076 - 34964 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127418s
[INFO] 10.244.0.23:59976 - 63156 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088725s
[INFO] 10.244.0.23:45897 - 19764 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.005364575s
[INFO] 10.244.0.23:56748 - 25660 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.007006785s
[INFO] 10.244.0.28:52934 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001485295s
[INFO] 10.244.0.28:60918 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000154725s
==> describe nodes <==
Name: addons-886556
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-886556
kubernetes.io/os=linux
minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
minikube.k8s.io/name=addons-886556
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_17T19_21_35_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-886556
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 17 Dec 2025 19:21:30 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-886556
AcquireTime: <unset>
RenewTime: Wed, 17 Dec 2025 19:26:20 +0000
Conditions:
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
----             ------  -----------------                 ------------------                ------                      -------
MemoryPressure   False   Wed, 17 Dec 2025 19:24:37 +0000   Wed, 17 Dec 2025 19:21:27 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
DiskPressure     False   Wed, 17 Dec 2025 19:24:37 +0000   Wed, 17 Dec 2025 19:21:27 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
PIDPressure      False   Wed, 17 Dec 2025 19:24:37 +0000   Wed, 17 Dec 2025 19:21:27 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
Ready            True    Wed, 17 Dec 2025 19:24:37 +0000   Wed, 17 Dec 2025 19:21:35 +0000   KubeletReady                kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.92
Hostname: addons-886556
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001796Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001796Ki
pods: 110
System Info:
Machine ID: 9d7dd346d2b74fec936f08e6e7425367
System UUID: 9d7dd346-d2b7-4fec-936f-08e6e7425367
Boot ID: b6c6afb0-3cd2-4306-8040-20d6fd16da45
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.3
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m12s
default hello-world-app-5d498dc89-55zvp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m31s
ingress-nginx ingress-nginx-controller-85d4c799dd-2lds5 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m39s
kube-system amd-gpu-device-plugin-z6w8r 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m45s
kube-system coredns-66bc5c9577-xndpj 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m49s
kube-system etcd-addons-886556 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m54s
kube-system kube-apiserver-addons-886556 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m54s
kube-system kube-controller-manager-addons-886556 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m55s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m42s
kube-system kube-proxy-tmm7b 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m49s
kube-system kube-scheduler-addons-886556 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m54s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m41s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m47s kube-proxy
Normal NodeHasSufficientMemory 5m2s (x8 over 5m2s) kubelet Node addons-886556 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m2s (x8 over 5m2s) kubelet Node addons-886556 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m2s (x7 over 5m2s) kubelet Node addons-886556 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m2s kubelet Updated Node Allocatable limit across pods
Normal Starting 4m54s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m54s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m54s kubelet Node addons-886556 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m54s kubelet Node addons-886556 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m54s kubelet Node addons-886556 status is now: NodeHasSufficientPID
Normal NodeReady 4m53s kubelet Node addons-886556 status is now: NodeReady
Normal RegisteredNode 4m50s node-controller Node addons-886556 event: Registered Node addons-886556 in Controller
==> dmesg <==
[ +0.446890] kauditd_printk_skb: 284 callbacks suppressed
[ +2.107774] kauditd_printk_skb: 428 callbacks suppressed
[Dec17 19:22] kauditd_printk_skb: 53 callbacks suppressed
[ +10.081555] kauditd_printk_skb: 11 callbacks suppressed
[ +9.046903] kauditd_printk_skb: 26 callbacks suppressed
[ +5.390511] kauditd_printk_skb: 26 callbacks suppressed
[ +5.080651] kauditd_printk_skb: 17 callbacks suppressed
[ +6.066452] kauditd_printk_skb: 131 callbacks suppressed
[ +2.514288] kauditd_printk_skb: 77 callbacks suppressed
[ +1.692741] kauditd_printk_skb: 124 callbacks suppressed
[Dec17 19:23] kauditd_printk_skb: 46 callbacks suppressed
[ +3.935200] kauditd_printk_skb: 68 callbacks suppressed
[ +6.084297] kauditd_printk_skb: 56 callbacks suppressed
[ +4.518929] kauditd_printk_skb: 38 callbacks suppressed
[ +10.595753] kauditd_printk_skb: 5 callbacks suppressed
[ +0.000066] kauditd_printk_skb: 22 callbacks suppressed
[ +4.389164] kauditd_printk_skb: 89 callbacks suppressed
[ +0.915229] kauditd_printk_skb: 81 callbacks suppressed
[ +1.233366] kauditd_printk_skb: 85 callbacks suppressed
[ +0.062258] kauditd_printk_skb: 194 callbacks suppressed
[Dec17 19:24] kauditd_printk_skb: 60 callbacks suppressed
[ +3.982783] kauditd_printk_skb: 88 callbacks suppressed
[ +9.766294] kauditd_printk_skb: 42 callbacks suppressed
[ +7.886932] kauditd_printk_skb: 61 callbacks suppressed
[Dec17 19:26] kauditd_printk_skb: 127 callbacks suppressed
==> etcd [638eb74bc3cef930c7aea686a3049dad59b532c6928943988a51c6a42a17fd62] <==
{"level":"info","ts":"2025-12-17T19:22:53.561485Z","caller":"traceutil/trace.go:172","msg":"trace[1121628281] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1098; }","duration":"134.116774ms","start":"2025-12-17T19:22:53.427362Z","end":"2025-12-17T19:22:53.561479Z","steps":["trace[1121628281] 'range keys from in-memory index tree' (duration: 134.024496ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T19:22:59.014982Z","caller":"traceutil/trace.go:172","msg":"trace[1399873484] linearizableReadLoop","detail":"{readStateIndex:1168; appliedIndex:1168; }","duration":"179.817693ms","start":"2025-12-17T19:22:58.835146Z","end":"2025-12-17T19:22:59.014964Z","steps":["trace[1399873484] 'read index received' (duration: 179.812165ms)","trace[1399873484] 'applied index is now lower than readState.Index' (duration: 4.703µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-17T19:22:59.017165Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.003961ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-17T19:22:59.019712Z","caller":"traceutil/trace.go:172","msg":"trace[476973923] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1135; }","duration":"184.559038ms","start":"2025-12-17T19:22:58.835142Z","end":"2025-12-17T19:22:59.019701Z","steps":["trace[476973923] 'agreement among raft nodes before linearized reading' (duration: 180.019514ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T19:22:59.017798Z","caller":"traceutil/trace.go:172","msg":"trace[1135835113] transaction","detail":"{read_only:false; response_revision:1136; number_of_response:1; }","duration":"270.861499ms","start":"2025-12-17T19:22:58.746925Z","end":"2025-12-17T19:22:59.017786Z","steps":["trace[1135835113] 'process raft request' (duration: 268.231019ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T19:23:02.154531Z","caller":"traceutil/trace.go:172","msg":"trace[384274761] linearizableReadLoop","detail":"{readStateIndex:1172; appliedIndex:1172; }","duration":"225.927536ms","start":"2025-12-17T19:23:01.928575Z","end":"2025-12-17T19:23:02.154503Z","steps":["trace[384274761] 'read index received' (duration: 225.922196ms)","trace[384274761] 'applied index is now lower than readState.Index' (duration: 4.751µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-17T19:23:02.154622Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"226.034001ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-17T19:23:02.154639Z","caller":"traceutil/trace.go:172","msg":"trace[1825000597] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1139; }","duration":"226.067346ms","start":"2025-12-17T19:23:01.928566Z","end":"2025-12-17T19:23:02.154634Z","steps":["trace[1825000597] 'agreement among raft nodes before linearized reading' (duration: 226.005378ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T19:23:02.155153Z","caller":"traceutil/trace.go:172","msg":"trace[1256016360] transaction","detail":"{read_only:false; response_revision:1140; number_of_response:1; }","duration":"250.04643ms","start":"2025-12-17T19:23:01.905091Z","end":"2025-12-17T19:23:02.155138Z","steps":["trace[1256016360] 'process raft request' (duration: 249.956849ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T19:23:18.444695Z","caller":"traceutil/trace.go:172","msg":"trace[771687873] linearizableReadLoop","detail":"{readStateIndex:1281; appliedIndex:1281; }","duration":"154.522948ms","start":"2025-12-17T19:23:18.290097Z","end":"2025-12-17T19:23:18.444620Z","steps":["trace[771687873] 'read index received' (duration: 154.486162ms)","trace[771687873] 'applied index is now lower than readState.Index' (duration: 35.914µs)"],"step_count":2}
{"level":"info","ts":"2025-12-17T19:23:18.444835Z","caller":"traceutil/trace.go:172","msg":"trace[1495691571] transaction","detail":"{read_only:false; response_revision:1246; number_of_response:1; }","duration":"232.589009ms","start":"2025-12-17T19:23:18.212234Z","end":"2025-12-17T19:23:18.444823Z","steps":["trace[1495691571] 'process raft request' (duration: 232.499756ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-17T19:23:18.444924Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.805607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
{"level":"info","ts":"2025-12-17T19:23:18.444950Z","caller":"traceutil/trace.go:172","msg":"trace[642985539] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1246; }","duration":"154.85149ms","start":"2025-12-17T19:23:18.290093Z","end":"2025-12-17T19:23:18.444944Z","steps":["trace[642985539] 'agreement among raft nodes before linearized reading' (duration: 154.734082ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-17T19:23:18.445246Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.98814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-17T19:23:18.445293Z","caller":"traceutil/trace.go:172","msg":"trace[254387305] range","detail":"{range_begin:/registry/prioritylevelconfigurations; range_end:; response_count:0; response_revision:1246; }","duration":"149.039601ms","start":"2025-12-17T19:23:18.296247Z","end":"2025-12-17T19:23:18.445286Z","steps":["trace[254387305] 'agreement among raft nodes before linearized reading' (duration: 148.971029ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T19:23:46.803527Z","caller":"traceutil/trace.go:172","msg":"trace[1904232902] transaction","detail":"{read_only:false; response_revision:1413; number_of_response:1; }","duration":"155.554248ms","start":"2025-12-17T19:23:46.647957Z","end":"2025-12-17T19:23:46.803511Z","steps":["trace[1904232902] 'process raft request' (duration: 155.420257ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T19:23:50.750271Z","caller":"traceutil/trace.go:172","msg":"trace[414784047] linearizableReadLoop","detail":"{readStateIndex:1480; appliedIndex:1480; }","duration":"250.387736ms","start":"2025-12-17T19:23:50.499864Z","end":"2025-12-17T19:23:50.750252Z","steps":["trace[414784047] 'read index received' (duration: 250.381197ms)","trace[414784047] 'applied index is now lower than readState.Index' (duration: 5.627µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-17T19:23:50.750398Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"250.516345ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-17T19:23:50.750417Z","caller":"traceutil/trace.go:172","msg":"trace[358167303] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1434; }","duration":"250.550649ms","start":"2025-12-17T19:23:50.499860Z","end":"2025-12-17T19:23:50.750411Z","steps":["trace[358167303] 'agreement among raft nodes before linearized reading' (duration: 250.486968ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-17T19:23:50.750572Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"249.450773ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-17T19:23:50.750611Z","caller":"traceutil/trace.go:172","msg":"trace[1085933812] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1435; }","duration":"249.496588ms","start":"2025-12-17T19:23:50.501107Z","end":"2025-12-17T19:23:50.750603Z","steps":["trace[1085933812] 'agreement among raft nodes before linearized reading' (duration: 249.433568ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T19:23:50.751261Z","caller":"traceutil/trace.go:172","msg":"trace[1160846200] transaction","detail":"{read_only:false; response_revision:1435; number_of_response:1; }","duration":"327.763968ms","start":"2025-12-17T19:23:50.423474Z","end":"2025-12-17T19:23:50.751238Z","steps":["trace[1160846200] 'process raft request' (duration: 326.970805ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-17T19:23:50.751519Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T19:23:50.423452Z","time spent":"327.968197ms","remote":"127.0.0.1:41306","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1412 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
{"level":"warn","ts":"2025-12-17T19:23:50.756562Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.707287ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" limit:1 ","response":"range_response_count:1 size:2270"}
{"level":"info","ts":"2025-12-17T19:23:50.756597Z","caller":"traceutil/trace.go:172","msg":"trace[1962334978] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:1; response_revision:1435; }","duration":"111.747082ms","start":"2025-12-17T19:23:50.644841Z","end":"2025-12-17T19:23:50.756588Z","steps":["trace[1962334978] 'agreement among raft nodes before linearized reading' (duration: 107.056957ms)"],"step_count":1}
==> kernel <==
19:26:28 up 5 min, 0 users, load average: 0.51, 1.10, 0.61
Linux addons-886556 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [c5e9c28401ad79fd2540b52da660361d5bb63b6cdeeb79bbf826a753949bd7b5] <==
E1217 19:22:32.083813 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.247.210:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.247.210:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.247.210:443: connect: connection refused" logger="UnhandledError"
E1217 19:22:32.105008 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.247.210:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.247.210:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.247.210:443: connect: connection refused" logger="UnhandledError"
I1217 19:22:32.240207 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1217 19:23:28.334340 1 conn.go:339] Error on socket receive: read tcp 192.168.39.92:8443->192.168.39.1:45636: use of closed network connection
E1217 19:23:28.555124 1 conn.go:339] Error on socket receive: read tcp 192.168.39.92:8443->192.168.39.1:45654: use of closed network connection
I1217 19:23:38.094360 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.3.250"}
I1217 19:23:57.754147 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1217 19:23:58.000830 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.25.57"}
E1217 19:24:10.950297 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I1217 19:24:14.787417 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1217 19:24:33.098033 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1217 19:24:38.967635 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1217 19:24:38.969917 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1217 19:24:39.000829 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1217 19:24:39.000925 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1217 19:24:39.012588 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1217 19:24:39.012778 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1217 19:24:39.056804 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1217 19:24:39.056858 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1217 19:24:39.212056 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1217 19:24:39.212180 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1217 19:24:40.000923 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1217 19:24:40.212395 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1217 19:24:40.218257 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I1217 19:26:26.884114 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.26.190"}
==> kube-controller-manager [f9f0548c6961aa8c7b3e0b8fb9bdae38d6af780451f3b6c84a7aedb37b1535f3] <==
E1217 19:24:49.152552 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 19:24:49.732270 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 19:24:49.733358 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 19:24:56.632903 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 19:24:56.634197 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 19:24:58.686121 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 19:24:58.687551 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 19:24:59.821019 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 19:24:59.822124 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
I1217 19:25:09.062631 1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
I1217 19:25:09.062740 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1217 19:25:09.175548 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1217 19:25:09.175616 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1217 19:25:19.562808 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 19:25:19.563974 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 19:25:19.724266 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 19:25:19.725476 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 19:25:20.239756 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 19:25:20.240801 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 19:25:52.527854 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 19:25:52.529036 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 19:25:56.543386 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 19:25:56.544483 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 19:26:07.936474 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 19:26:07.938044 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [f18a26473585ecbde03c3eae1a070a8594be51992faebfb78ac9a623c2fd6e6c] <==
I1217 19:21:40.627007 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1217 19:21:40.728238 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1217 19:21:40.728290 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.92"]
E1217 19:21:40.728396 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1217 19:21:40.845068 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1217 19:21:40.845176 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1217 19:21:40.845220 1 server_linux.go:132] "Using iptables Proxier"
I1217 19:21:40.884366 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1217 19:21:40.888010 1 server.go:527] "Version info" version="v1.34.3"
I1217 19:21:40.888238 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1217 19:21:40.905736 1 config.go:200] "Starting service config controller"
I1217 19:21:40.905755 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1217 19:21:40.905778 1 config.go:106] "Starting endpoint slice config controller"
I1217 19:21:40.905782 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1217 19:21:40.905792 1 config.go:403] "Starting serviceCIDR config controller"
I1217 19:21:40.905796 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1217 19:21:40.906586 1 config.go:309] "Starting node config controller"
I1217 19:21:40.906594 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1217 19:21:40.906599 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1217 19:21:41.006109 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1217 19:21:41.006223 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1217 19:21:41.006237 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [82e9006ec843a3c6ebc7d5a43db3e8e6d3798a906f5eae750ae071dcedce2d68] <==
E1217 19:21:30.858295 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1217 19:21:30.858367 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1217 19:21:30.858495 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1217 19:21:30.858554 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1217 19:21:30.858616 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1217 19:21:31.696323 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1217 19:21:31.705363 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1217 19:21:31.705442 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1217 19:21:31.793970 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1217 19:21:31.793991 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1217 19:21:31.814385 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1217 19:21:31.815489 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1217 19:21:31.847535 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1217 19:21:31.899782 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1217 19:21:31.903166 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1217 19:21:31.962092 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1217 19:21:32.028925 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1217 19:21:32.052298 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1217 19:21:32.078882 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1217 19:21:32.081088 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1217 19:21:32.084236 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1217 19:21:32.112692 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1217 19:21:32.165808 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1217 19:21:32.175747 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
I1217 19:21:34.050463 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Dec 17 19:24:44 addons-886556 kubelet[1509]: E1217 19:24:44.638449 1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765999484638119309 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 19:24:44 addons-886556 kubelet[1509]: E1217 19:24:44.638469 1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765999484638119309 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 19:25:38 addons-886556 kubelet[1509]: I1217 19:25:38.389962 1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Dec 17 19:25:48 addons-886556 kubelet[1509]: I1217 19:25:48.390372 1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-z6w8r" secret="" err="secret \"gcp-auth\" not found"
Dec 17 19:26:26 addons-886556 kubelet[1509]: I1217 19:26:26.958045 1509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc8js\" (UniqueName: \"kubernetes.io/projected/9f819028-eb2e-4a6b-b5a0-aec761ac06d4-kube-api-access-wc8js\") pod \"hello-world-app-5d498dc89-55zvp\" (UID: \"9f819028-eb2e-4a6b-b5a0-aec761ac06d4\") " pod="default/hello-world-app-5d498dc89-55zvp"
==> storage-provisioner [e17df536b7e48e5734a2ace40c0cfdca4505d136dadcb49d48a341c5ad44be2f] <==
W1217 19:26:03.876639 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 19:26:28.021239 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-886556 -n addons-886556
helpers_test.go:270: (dbg) Run: kubectl --context addons-886556 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-55zvp ingress-nginx-admission-create-fg4xw ingress-nginx-admission-patch-d7b4h
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run: kubectl --context addons-886556 describe pod hello-world-app-5d498dc89-55zvp ingress-nginx-admission-create-fg4xw ingress-nginx-admission-patch-d7b4h
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-886556 describe pod hello-world-app-5d498dc89-55zvp ingress-nginx-admission-create-fg4xw ingress-nginx-admission-patch-d7b4h: exit status 1 (74.638137ms)
-- stdout --
Name: hello-world-app-5d498dc89-55zvp
Namespace: default
Priority: 0
Service Account: default
Node: addons-886556/192.168.39.92
Start Time: Wed, 17 Dec 2025 19:26:26 +0000
Labels: app=hello-world-app
pod-template-hash=5d498dc89
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hello-world-app-5d498dc89
Containers:
hello-world-app:
Container ID:
Image: docker.io/kicbase/echo-server:1.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wc8js (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-wc8js:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3s default-scheduler Successfully assigned default/hello-world-app-5d498dc89-55zvp to addons-886556
Normal Pulling 2s kubelet spec.containers{hello-world-app}: Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-fg4xw" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-d7b4h" not found
** /stderr **
helpers_test.go:288: kubectl --context addons-886556 describe pod hello-world-app-5d498dc89-55zvp ingress-nginx-admission-create-fg4xw ingress-nginx-admission-patch-d7b4h: exit status 1
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-886556 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-886556 addons disable ingress-dns --alsologtostderr -v=1: (1.579401963s)
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-886556 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-886556 addons disable ingress --alsologtostderr -v=1: (7.834819554s)
--- FAIL: TestAddons/parallel/Ingress (161.24s)