=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run: kubectl --context addons-462156 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run: kubectl --context addons-462156 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run: kubectl --context addons-462156 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [38540fef-532f-483f-9d53-b8ff5b9bcf5b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [38540fef-532f-483f-9d53-b8ff5b9bcf5b] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004641347s
I1210 22:29:30.017333 9065 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run: out/minikube-linux-amd64 -p addons-462156 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-462156 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.475841019s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run: kubectl --context addons-462156 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run: out/minikube-linux-amd64 -p addons-462156 ip
addons_test.go:301: (dbg) Run: nslookup hello-john.test 192.168.39.89
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-462156 -n addons-462156
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-amd64 -p addons-462156 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-462156 logs -n 25: (1.171885373s)
helpers_test.go:261: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-809442 │ download-only-809442 │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │ 10 Dec 25 22:26 UTC │
│ start │ --download-only -p binary-mirror-634983 --alsologtostderr --binary-mirror http://127.0.0.1:43689 --driver=kvm2 --container-runtime=crio │ binary-mirror-634983 │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │ │
│ delete │ -p binary-mirror-634983 │ binary-mirror-634983 │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │ 10 Dec 25 22:26 UTC │
│ addons │ disable dashboard -p addons-462156 │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │ │
│ addons │ enable dashboard -p addons-462156 │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │ │
│ start │ -p addons-462156 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:26 UTC │ 10 Dec 25 22:28 UTC │
│ addons │ addons-462156 addons disable volcano --alsologtostderr -v=1 │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │ 10 Dec 25 22:28 UTC │
│ addons │ addons-462156 addons disable gcp-auth --alsologtostderr -v=1 │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │ 10 Dec 25 22:28 UTC │
│ addons │ enable headlamp -p addons-462156 --alsologtostderr -v=1 │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:28 UTC │ 10 Dec 25 22:28 UTC │
│ addons │ addons-462156 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
│ addons │ addons-462156 addons disable metrics-server --alsologtostderr -v=1 │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
│ addons │ addons-462156 addons disable yakd --alsologtostderr -v=1 │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
│ addons │ addons-462156 addons disable headlamp --alsologtostderr -v=1 │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
│ ip │ addons-462156 ip │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
│ addons │ addons-462156 addons disable registry --alsologtostderr -v=1 │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
│ ssh │ addons-462156 ssh cat /opt/local-path-provisioner/pvc-b4447a5f-b7fa-4088-983a-5d4d2b4a48d3_default_test-pvc/file1 │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
│ addons │ addons-462156 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:30 UTC │
│ addons │ addons-462156 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-462156 │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
│ ssh │ addons-462156 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ │
│ addons │ addons-462156 addons disable registry-creds --alsologtostderr -v=1 │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
│ addons │ addons-462156 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
│ addons │ addons-462156 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:29 UTC │
│ addons │ addons-462156 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:29 UTC │ 10 Dec 25 22:30 UTC │
│ ip │ addons-462156 ip │ addons-462156 │ jenkins │ v1.37.0 │ 10 Dec 25 22:31 UTC │ 10 Dec 25 22:31 UTC │
└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/10 22:26:32
Running on machine: ubuntu-20-agent-8
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1210 22:26:32.169557 9998 out.go:360] Setting OutFile to fd 1 ...
I1210 22:26:32.169644 9998 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:26:32.169651 9998 out.go:374] Setting ErrFile to fd 2...
I1210 22:26:32.169655 9998 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 22:26:32.169828 9998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-5125/.minikube/bin
I1210 22:26:32.170306 9998 out.go:368] Setting JSON to false
I1210 22:26:32.171074 9998 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":533,"bootTime":1765405059,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1210 22:26:32.171122 9998 start.go:143] virtualization: kvm guest
I1210 22:26:32.173038 9998 out.go:179] * [addons-462156] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1210 22:26:32.174335 9998 out.go:179] - MINIKUBE_LOCATION=22061
I1210 22:26:32.174327 9998 notify.go:221] Checking for updates...
I1210 22:26:32.176777 9998 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1210 22:26:32.177993 9998 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22061-5125/kubeconfig
I1210 22:26:32.179388 9998 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-5125/.minikube
I1210 22:26:32.180707 9998 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1210 22:26:32.182073 9998 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1210 22:26:32.183429 9998 driver.go:422] Setting default libvirt URI to qemu:///system
I1210 22:26:32.212895 9998 out.go:179] * Using the kvm2 driver based on user configuration
I1210 22:26:32.214276 9998 start.go:309] selected driver: kvm2
I1210 22:26:32.214290 9998 start.go:927] validating driver "kvm2" against <nil>
I1210 22:26:32.214308 9998 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1210 22:26:32.214945 9998 start_flags.go:342] no existing cluster config was found, will generate one from the flags
I1210 22:26:32.215149 9998 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1210 22:26:32.215184 9998 cni.go:84] Creating CNI manager for ""
I1210 22:26:32.215223 9998 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1210 22:26:32.215231 9998 start_flags.go:351] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1210 22:26:32.215271 9998 start.go:353] cluster config:
{Name:addons-462156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-462156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1210 22:26:32.215369 9998 iso.go:125] acquiring lock: {Name:mk1091e707b59a200dfce77f9e85a41a0a31058c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1210 22:26:32.216915 9998 out.go:179] * Starting "addons-462156" primary control-plane node in "addons-462156" cluster
I1210 22:26:32.218022 9998 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1210 22:26:32.218045 9998 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-5125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
I1210 22:26:32.218056 9998 cache.go:65] Caching tarball of preloaded images
I1210 22:26:32.218122 9998 preload.go:238] Found /home/jenkins/minikube-integration/22061-5125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1210 22:26:32.218132 9998 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
I1210 22:26:32.218421 9998 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/config.json ...
I1210 22:26:32.218449 9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/config.json: {Name:mka7649c59aae252a336cdc3b3bcfac74b8f5b1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 22:26:32.218565 9998 start.go:360] acquireMachinesLock for addons-462156: {Name:mkee27f251311e7c2b20a9d6393fa289a9410b32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1210 22:26:32.218604 9998 start.go:364] duration metric: took 28.357µs to acquireMachinesLock for "addons-462156"
I1210 22:26:32.218621 9998 start.go:93] Provisioning new machine with config: &{Name:addons-462156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-462156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1210 22:26:32.218669 9998 start.go:125] createHost starting for "" (driver="kvm2")
I1210 22:26:32.220079 9998 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1210 22:26:32.220209 9998 start.go:159] libmachine.API.Create for "addons-462156" (driver="kvm2")
I1210 22:26:32.220233 9998 client.go:173] LocalClient.Create starting
I1210 22:26:32.220298 9998 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca.pem
I1210 22:26:32.250694 9998 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/cert.pem
I1210 22:26:32.278720 9998 main.go:143] libmachine: creating domain...
I1210 22:26:32.278739 9998 main.go:143] libmachine: creating network...
I1210 22:26:32.280083 9998 main.go:143] libmachine: found existing default network
I1210 22:26:32.280392 9998 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1210 22:26:32.280981 9998 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b4d360}
I1210 22:26:32.281074 9998 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-462156</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1210 22:26:32.287137 9998 main.go:143] libmachine: creating private network mk-addons-462156 192.168.39.0/24...
I1210 22:26:32.350851 9998 main.go:143] libmachine: private network mk-addons-462156 192.168.39.0/24 created
I1210 22:26:32.351114 9998 main.go:143] libmachine: <network>
<name>mk-addons-462156</name>
<uuid>4e33da69-9275-4eca-b612-86f4ce6cac3e</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:56:9a:40'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1210 22:26:32.351141 9998 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156 ...
I1210 22:26:32.351165 9998 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22061-5125/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
I1210 22:26:32.351180 9998 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22061-5125/.minikube
I1210 22:26:32.351257 9998 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22061-5125/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22061-5125/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso...
I1210 22:26:32.620660 9998 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa...
I1210 22:26:32.660147 9998 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/addons-462156.rawdisk...
I1210 22:26:32.660184 9998 main.go:143] libmachine: Writing magic tar header
I1210 22:26:32.660208 9998 main.go:143] libmachine: Writing SSH key tar header
I1210 22:26:32.660276 9998 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156 ...
I1210 22:26:32.660335 9998 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156
I1210 22:26:32.660379 9998 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156 (perms=drwx------)
I1210 22:26:32.660397 9998 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22061-5125/.minikube/machines
I1210 22:26:32.660406 9998 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22061-5125/.minikube/machines (perms=drwxr-xr-x)
I1210 22:26:32.660417 9998 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22061-5125/.minikube
I1210 22:26:32.660426 9998 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22061-5125/.minikube (perms=drwxr-xr-x)
I1210 22:26:32.660434 9998 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22061-5125
I1210 22:26:32.660473 9998 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22061-5125 (perms=drwxrwxr-x)
I1210 22:26:32.660483 9998 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1210 22:26:32.660493 9998 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1210 22:26:32.660500 9998 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1210 22:26:32.660507 9998 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1210 22:26:32.660516 9998 main.go:143] libmachine: checking permissions on dir: /home
I1210 22:26:32.660525 9998 main.go:143] libmachine: skipping /home - not owner
I1210 22:26:32.660530 9998 main.go:143] libmachine: defining domain...
I1210 22:26:32.661866 9998 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-462156</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/addons-462156.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-462156'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1210 22:26:32.669613 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:9c:8e:25 in network default
I1210 22:26:32.670162 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:32.670181 9998 main.go:143] libmachine: starting domain...
I1210 22:26:32.670186 9998 main.go:143] libmachine: ensuring networks are active...
I1210 22:26:32.670937 9998 main.go:143] libmachine: Ensuring network default is active
I1210 22:26:32.671307 9998 main.go:143] libmachine: Ensuring network mk-addons-462156 is active
I1210 22:26:32.672041 9998 main.go:143] libmachine: getting domain XML...
I1210 22:26:32.673101 9998 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-462156</name>
<uuid>04673162-af0d-46ce-874c-a95dda098d35</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/addons-462156.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:8c:7a:8f'/>
<source network='mk-addons-462156'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:9c:8e:25'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1210 22:26:33.972515 9998 main.go:143] libmachine: waiting for domain to start...
I1210 22:26:33.973933 9998 main.go:143] libmachine: domain is now running
I1210 22:26:33.973956 9998 main.go:143] libmachine: waiting for IP...
I1210 22:26:33.974688 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:33.975154 9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
I1210 22:26:33.975171 9998 main.go:143] libmachine: trying to list again with source=arp
I1210 22:26:33.975447 9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
I1210 22:26:33.975490 9998 retry.go:31] will retry after 210.316166ms: waiting for domain to come up
I1210 22:26:34.187199 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:34.187840 9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
I1210 22:26:34.187862 9998 main.go:143] libmachine: trying to list again with source=arp
I1210 22:26:34.188157 9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
I1210 22:26:34.188204 9998 retry.go:31] will retry after 289.237581ms: waiting for domain to come up
I1210 22:26:34.478636 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:34.479125 9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
I1210 22:26:34.479141 9998 main.go:143] libmachine: trying to list again with source=arp
I1210 22:26:34.479469 9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
I1210 22:26:34.479508 9998 retry.go:31] will retry after 470.255734ms: waiting for domain to come up
I1210 22:26:34.950941 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:34.951449 9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
I1210 22:26:34.951462 9998 main.go:143] libmachine: trying to list again with source=arp
I1210 22:26:34.951729 9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
I1210 22:26:34.951755 9998 retry.go:31] will retry after 467.929401ms: waiting for domain to come up
I1210 22:26:35.421550 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:35.422196 9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
I1210 22:26:35.422217 9998 main.go:143] libmachine: trying to list again with source=arp
I1210 22:26:35.422566 9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
I1210 22:26:35.422607 9998 retry.go:31] will retry after 534.97958ms: waiting for domain to come up
I1210 22:26:35.959333 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:35.959812 9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
I1210 22:26:35.959826 9998 main.go:143] libmachine: trying to list again with source=arp
I1210 22:26:35.960059 9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
I1210 22:26:35.960084 9998 retry.go:31] will retry after 624.235412ms: waiting for domain to come up
I1210 22:26:36.585972 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:36.586381 9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
I1210 22:26:36.586408 9998 main.go:143] libmachine: trying to list again with source=arp
I1210 22:26:36.586719 9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
I1210 22:26:36.586752 9998 retry.go:31] will retry after 1.055332171s: waiting for domain to come up
I1210 22:26:37.643581 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:37.644206 9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
I1210 22:26:37.644224 9998 main.go:143] libmachine: trying to list again with source=arp
I1210 22:26:37.644496 9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
I1210 22:26:37.644532 9998 retry.go:31] will retry after 1.103273366s: waiting for domain to come up
I1210 22:26:38.749677 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:38.750109 9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
I1210 22:26:38.750124 9998 main.go:143] libmachine: trying to list again with source=arp
I1210 22:26:38.750368 9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
I1210 22:26:38.750395 9998 retry.go:31] will retry after 1.832613895s: waiting for domain to come up
I1210 22:26:40.585524 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:40.586170 9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
I1210 22:26:40.586189 9998 main.go:143] libmachine: trying to list again with source=arp
I1210 22:26:40.586510 9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
I1210 22:26:40.586547 9998 retry.go:31] will retry after 1.876007042s: waiting for domain to come up
I1210 22:26:42.464650 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:42.465175 9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
I1210 22:26:42.465189 9998 main.go:143] libmachine: trying to list again with source=arp
I1210 22:26:42.465447 9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
I1210 22:26:42.465478 9998 retry.go:31] will retry after 2.588292567s: waiting for domain to come up
I1210 22:26:45.057261 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:45.057821 9998 main.go:143] libmachine: no network interface addresses found for domain addons-462156 (source=lease)
I1210 22:26:45.057838 9998 main.go:143] libmachine: trying to list again with source=arp
I1210 22:26:45.058140 9998 main.go:143] libmachine: unable to find current IP address of domain addons-462156 in network mk-addons-462156 (interfaces detected: [])
I1210 22:26:45.058179 9998 retry.go:31] will retry after 2.592577244s: waiting for domain to come up
I1210 22:26:47.652009 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:47.652698 9998 main.go:143] libmachine: domain addons-462156 has current primary IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:47.652716 9998 main.go:143] libmachine: found domain IP: 192.168.39.89
I1210 22:26:47.652726 9998 main.go:143] libmachine: reserving static IP address...
I1210 22:26:47.653236 9998 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-462156", mac: "52:54:00:8c:7a:8f", ip: "192.168.39.89"} in network mk-addons-462156
I1210 22:26:47.827736 9998 main.go:143] libmachine: reserved static IP address 192.168.39.89 for domain addons-462156
I1210 22:26:47.827761 9998 main.go:143] libmachine: waiting for SSH...
I1210 22:26:47.827769 9998 main.go:143] libmachine: Getting to WaitForSSH function...
I1210 22:26:47.830361 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:47.830899 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8c:7a:8f}
I1210 22:26:47.830924 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:47.831132 9998 main.go:143] libmachine: Using SSH client type: native
I1210 22:26:47.831392 9998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.89 22 <nil> <nil>}
I1210 22:26:47.831404 9998 main.go:143] libmachine: About to run SSH command:
exit 0
I1210 22:26:47.947064 9998 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1210 22:26:47.947429 9998 main.go:143] libmachine: domain creation complete
I1210 22:26:47.949019 9998 machine.go:94] provisionDockerMachine start ...
I1210 22:26:47.951191 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:47.951607 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:26:47.951636 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:47.951791 9998 main.go:143] libmachine: Using SSH client type: native
I1210 22:26:47.952008 9998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.89 22 <nil> <nil>}
I1210 22:26:47.952021 9998 main.go:143] libmachine: About to run SSH command:
hostname
I1210 22:26:48.063022 9998 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1210 22:26:48.063053 9998 buildroot.go:166] provisioning hostname "addons-462156"
I1210 22:26:48.065786 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:48.066101 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:26:48.066149 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:48.066364 9998 main.go:143] libmachine: Using SSH client type: native
I1210 22:26:48.066580 9998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.89 22 <nil> <nil>}
I1210 22:26:48.066592 9998 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-462156 && echo "addons-462156" | sudo tee /etc/hostname
I1210 22:26:48.195095 9998 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-462156
I1210 22:26:48.197631 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:48.198177 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:26:48.198203 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:48.198352 9998 main.go:143] libmachine: Using SSH client type: native
I1210 22:26:48.198553 9998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.89 22 <nil> <nil>}
I1210 22:26:48.198586 9998 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-462156' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-462156/g' /etc/hosts;
else
echo '127.0.1.1 addons-462156' | sudo tee -a /etc/hosts;
fi
fi
I1210 22:26:48.334882 9998 main.go:143] libmachine: SSH cmd err, output: <nil>:
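The `/etc/hosts` edit run over SSH above is idempotent: leave the file alone if the hostname is already mapped, otherwise rewrite the `127.0.1.1` line or append one. The same logic as pure string manipulation (a sketch, not minikube's implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry mirrors the /etc/hosts shell edit above: keep the file
// unchanged if the hostname is already present, otherwise replace the
// existing 127.0.1.1 line or append a new one.
func ensureHostsEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), " "+name) {
			return hosts // already mapped; idempotent no-op
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // replace the stale mapping
			return strings.Join(lines, "\n")
		}
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 minikube\n", "addons-462156"))
}
```

Running it twice with the same name returns the first result unchanged, which is why the SSH command can be re-run safely on a reprovisioned machine.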
I1210 22:26:48.334908 9998 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22061-5125/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-5125/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-5125/.minikube}
I1210 22:26:48.334924 9998 buildroot.go:174] setting up certificates
I1210 22:26:48.334936 9998 provision.go:84] configureAuth start
I1210 22:26:48.337577 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:48.337943 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:26:48.337972 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:48.340138 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:48.340472 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:26:48.340494 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:48.340628 9998 provision.go:143] copyHostCerts
I1210 22:26:48.340704 9998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-5125/.minikube/ca.pem (1078 bytes)
I1210 22:26:48.340848 9998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-5125/.minikube/cert.pem (1123 bytes)
I1210 22:26:48.341001 9998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-5125/.minikube/key.pem (1675 bytes)
I1210 22:26:48.341099 9998 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-5125/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca-key.pem org=jenkins.addons-462156 san=[127.0.0.1 192.168.39.89 addons-462156 localhost minikube]
I1210 22:26:48.404755 9998 provision.go:177] copyRemoteCerts
I1210 22:26:48.404810 9998 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1210 22:26:48.407209 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:48.407618 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:26:48.407640 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:48.407835 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:26:48.495876 9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1210 22:26:48.524015 9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1210 22:26:48.552663 9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1210 22:26:48.580287 9998 provision.go:87] duration metric: took 245.338206ms to configureAuth
I1210 22:26:48.580316 9998 buildroot.go:189] setting minikube options for container-runtime
I1210 22:26:48.580524 9998 config.go:182] Loaded profile config "addons-462156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:26:48.583299 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:48.583702 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:26:48.583733 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:48.583902 9998 main.go:143] libmachine: Using SSH client type: native
I1210 22:26:48.584124 9998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.89 22 <nil> <nil>}
I1210 22:26:48.584144 9998 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1210 22:26:48.839178 9998 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1210 22:26:48.839232 9998 machine.go:97] duration metric: took 890.169875ms to provisionDockerMachine
I1210 22:26:48.839259 9998 client.go:176] duration metric: took 16.619015839s to LocalClient.Create
I1210 22:26:48.839284 9998 start.go:167] duration metric: took 16.619073728s to libmachine.API.Create "addons-462156"
I1210 22:26:48.839298 9998 start.go:293] postStartSetup for "addons-462156" (driver="kvm2")
I1210 22:26:48.839310 9998 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1210 22:26:48.839379 9998 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1210 22:26:48.842291 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:48.842861 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:26:48.842890 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:48.843052 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:26:48.930251 9998 ssh_runner.go:195] Run: cat /etc/os-release
I1210 22:26:48.935271 9998 info.go:137] Remote host: Buildroot 2025.02
I1210 22:26:48.935303 9998 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5125/.minikube/addons for local assets ...
I1210 22:26:48.935380 9998 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-5125/.minikube/files for local assets ...
I1210 22:26:48.935407 9998 start.go:296] duration metric: took 96.102593ms for postStartSetup
I1210 22:26:48.938720 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:48.939164 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:26:48.939199 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:48.939477 9998 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/config.json ...
I1210 22:26:48.939681 9998 start.go:128] duration metric: took 16.721000925s to createHost
I1210 22:26:48.942167 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:48.942566 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:26:48.942588 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:48.942720 9998 main.go:143] libmachine: Using SSH client type: native
I1210 22:26:48.942905 9998 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.89 22 <nil> <nil>}
I1210 22:26:48.942914 9998 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1210 22:26:49.054480 9998 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765405609.010460701
I1210 22:26:49.054510 9998 fix.go:216] guest clock: 1765405609.010460701
I1210 22:26:49.054536 9998 fix.go:229] Guest: 2025-12-10 22:26:49.010460701 +0000 UTC Remote: 2025-12-10 22:26:48.939693781 +0000 UTC m=+16.815037594 (delta=70.76692ms)
I1210 22:26:49.054554 9998 fix.go:200] guest clock delta is within tolerance: 70.76692ms
I1210 22:26:49.054558 9998 start.go:83] releasing machines lock for "addons-462156", held for 16.835944852s
I1210 22:26:49.057406 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:49.057816 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:26:49.057842 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:49.058352 9998 ssh_runner.go:195] Run: cat /version.json
I1210 22:26:49.058473 9998 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1210 22:26:49.061562 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:49.061946 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:26:49.061968 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:49.062014 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:49.062124 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:26:49.062569 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:26:49.062607 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:49.062777 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:26:49.143063 9998 ssh_runner.go:195] Run: systemctl --version
I1210 22:26:49.179976 9998 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1210 22:26:49.339734 9998 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1210 22:26:49.347605 9998 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1210 22:26:49.347664 9998 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1210 22:26:49.368076 9998 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1210 22:26:49.368102 9998 start.go:496] detecting cgroup driver to use...
I1210 22:26:49.368159 9998 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1210 22:26:49.392091 9998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1210 22:26:49.412400 9998 docker.go:218] disabling cri-docker service (if available) ...
I1210 22:26:49.412475 9998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1210 22:26:49.430362 9998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1210 22:26:49.446615 9998 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1210 22:26:49.589065 9998 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1210 22:26:49.804627 9998 docker.go:234] disabling docker service ...
I1210 22:26:49.804687 9998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1210 22:26:49.821191 9998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1210 22:26:49.836216 9998 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1210 22:26:49.991961 9998 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1210 22:26:50.134399 9998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1210 22:26:50.150200 9998 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1210 22:26:50.175284 9998 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1210 22:26:50.175368 9998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1210 22:26:50.188693 9998 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1210 22:26:50.188756 9998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1210 22:26:50.201474 9998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1210 22:26:50.214476 9998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1210 22:26:50.227100 9998 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1210 22:26:50.240186 9998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1210 22:26:50.252323 9998 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1210 22:26:50.274866 9998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
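Taken together, the `sed` edits above leave `/etc/crio/crio.conf.d/02-crio.conf` with roughly the following drop-in settings. This is a reconstruction from the commands in this log, not a dump of the actual file, and the section headers follow cri-o's default layout as an assumption (the `sed` commands themselves only match keys, not sections):

```toml
[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
```

The `ip_unprivileged_port_start=0` sysctl is what lets the ingress-nginx controller bind ports 80/443 without extra privileges.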
I1210 22:26:50.287289 9998 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1210 22:26:50.299059 9998 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1210 22:26:50.299116 9998 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1210 22:26:50.320730 9998 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1210 22:26:50.333897 9998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1210 22:26:50.472201 9998 ssh_runner.go:195] Run: sudo systemctl restart crio
I1210 22:26:50.582759 9998 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1210 22:26:50.582873 9998 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1210 22:26:50.588012 9998 start.go:564] Will wait 60s for crictl version
I1210 22:26:50.588081 9998 ssh_runner.go:195] Run: which crictl
I1210 22:26:50.592091 9998 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1210 22:26:50.627114 9998 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1210 22:26:50.627262 9998 ssh_runner.go:195] Run: crio --version
I1210 22:26:50.655008 9998 ssh_runner.go:195] Run: crio --version
I1210 22:26:50.686270 9998 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
I1210 22:26:50.689706 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:50.690065 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:26:50.690089 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:26:50.690254 9998 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1210 22:26:50.694646 9998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1210 22:26:50.708902 9998 kubeadm.go:884] updating cluster {Name:addons-462156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-462156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1210 22:26:50.709011 9998 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1210 22:26:50.709058 9998 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 22:26:50.736281 9998 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
I1210 22:26:50.736344 9998 ssh_runner.go:195] Run: which lz4
I1210 22:26:50.740585 9998 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1210 22:26:50.745107 9998 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1210 22:26:50.745135 9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
I1210 22:26:51.855223 9998 crio.go:462] duration metric: took 1.114670573s to copy over tarball
I1210 22:26:51.855292 9998 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1210 22:26:53.372246 9998 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.516920498s)
I1210 22:26:53.372269 9998 crio.go:469] duration metric: took 1.517018571s to extract the tarball
I1210 22:26:53.372279 9998 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1210 22:26:53.407828 9998 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 22:26:53.449051 9998 crio.go:514] all images are preloaded for cri-o runtime.
I1210 22:26:53.449072 9998 cache_images.go:86] Images are preloaded, skipping loading
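The preload handling above follows a check-copy-extract-delete flow: `stat` to see if the tarball is already on the node, `scp` it over if not, extract it with lz4 into `/var`, then remove it. A minimal local sketch of that flow — note it substitutes gzip for lz4 (the `lz4` binary may not be installed everywhere) and temp paths for `/preloaded.tar.lz4` and `/var`:

```shell
set -eu
DEST=$(mktemp -d)                 # stand-in for /var on the node
TARBALL=$DEST/preload.tar.gz      # stand-in for /preloaded.tar.lz4 (gzip here; the real flow uses lz4)

# Build a small tarball to stand in for the cached preload on the host.
mkdir -p "$DEST/lib"
echo "image-layer" > "$DEST/lib/layer.txt"
tar -C "$DEST" -czf "$TARBALL" lib
rm -rf "$DEST/lib"

# 1) existence check, mirroring `stat -c "%s %y" /preloaded.tar.lz4`
stat -c "%s %y" "$TARBALL" > /dev/null 2>&1 || echo "missing: minikube would scp the tarball over"

# 2) extract in place, mirroring `tar --xattrs -I lz4 -C /var -xf /preloaded.tar.lz4`
tar -C "$DEST" -xzf "$TARBALL"

# 3) delete the tarball once extracted, mirroring `rm /preloaded.tar.lz4`
rm -f "$TARBALL"
test -f "$DEST/lib/layer.txt" && echo "preload extracted"
```

After extraction, `crictl images` is re-run (as in the log) to confirm the runtime now sees the preloaded images.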
I1210 22:26:53.449078 9998 kubeadm.go:935] updating node { 192.168.39.89 8443 v1.34.2 crio true true} ...
I1210 22:26:53.449154 9998 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-462156 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
[Install]
config:
{KubernetesVersion:v1.34.2 ClusterName:addons-462156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1210 22:26:53.449260 9998 ssh_runner.go:195] Run: crio config
I1210 22:26:53.495717 9998 cni.go:84] Creating CNI manager for ""
I1210 22:26:53.495778 9998 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1210 22:26:53.495815 9998 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1210 22:26:53.495873 9998 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.89 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-462156 NodeName:addons-462156 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1210 22:26:53.496175 9998 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.89
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "addons-462156"
  kubeletExtraArgs:
  - name: "node-ip"
    value: "192.168.39.89"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.89"]
  extraArgs:
  - name: "enable-admission-plugins"
    value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
  - name: "allocate-node-cidrs"
    value: "true"
  - name: "leader-elect"
    value: "false"
scheduler:
  extraArgs:
  - name: "leader-elect"
    value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1210 22:26:53.496276 9998 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
I1210 22:26:53.509465 9998 binaries.go:51] Found k8s binaries, skipping transfer
I1210 22:26:53.509535 9998 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1210 22:26:53.520969 9998 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I1210 22:26:53.541077 9998 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1210 22:26:53.560681 9998 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
I1210 22:26:53.581579 9998 ssh_runner.go:195] Run: grep 192.168.39.89 control-plane.minikube.internal$ /etc/hosts
I1210 22:26:53.585612 9998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.89 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
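The `/etc/hosts` update above uses a "filter out any old entry, then append the new one" pattern, so repeated runs never accumulate duplicate lines. A minimal sketch of the same pattern against a temp file (the real command writes via `sudo cp` into `/etc/hosts` and matches a tab-separated entry):

```shell
set -eu
HOSTS=$(mktemp)                    # stand-in for /etc/hosts
printf '127.0.0.1 localhost\n192.168.39.1 control-plane.minikube.internal\n' > "$HOSTS"

update_hosts() {                   # $1 = IP, $2 = hostname; drop the old entry, append the new one
  { grep -v " $2\$" "$HOSTS" || true; echo "$1 $2"; } > "$HOSTS.new"
  mv "$HOSTS.new" "$HOSTS"         # minikube does `sudo cp` into /etc/hosts instead
}

update_hosts 192.168.39.89 control-plane.minikube.internal
update_hosts 192.168.39.89 control-plane.minikube.internal   # rerun is a no-op
grep control-plane.minikube.internal "$HOSTS"                # exactly one entry remains
```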
I1210 22:26:53.599863 9998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1210 22:26:53.747927 9998 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1210 22:26:53.781143 9998 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156 for IP: 192.168.39.89
I1210 22:26:53.781169 9998 certs.go:195] generating shared ca certs ...
I1210 22:26:53.781185 9998 certs.go:227] acquiring lock for ca certs: {Name:mkea05d5a03ad9931f0e4f58a8f8d8a307addad1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 22:26:53.781314 9998 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-5125/.minikube/ca.key
I1210 22:26:53.854417 9998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt ...
I1210 22:26:53.854451 9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt: {Name:mka2b739e386ec9988f2978e08f700a007b1aaa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 22:26:53.854620 9998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5125/.minikube/ca.key ...
I1210 22:26:53.854631 9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/ca.key: {Name:mk96567aa363f44c5e4bb3d596fdd02a58c35fbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 22:26:53.854717 9998 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-5125/.minikube/proxy-client-ca.key
I1210 22:26:53.891801 9998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5125/.minikube/proxy-client-ca.crt ...
I1210 22:26:53.891822 9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/proxy-client-ca.crt: {Name:mk264fdf7005b89cf2b12bffa5bd551cd8f9b8c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 22:26:53.891969 9998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5125/.minikube/proxy-client-ca.key ...
I1210 22:26:53.891981 9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/proxy-client-ca.key: {Name:mk55cd9d85b85d4fa27aa5825b03156606fb26fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 22:26:53.892047 9998 certs.go:257] generating profile certs ...
I1210 22:26:53.892099 9998 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.key
I1210 22:26:53.892121 9998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt with IP's: []
I1210 22:26:54.077166 9998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt ...
I1210 22:26:54.077193 9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.crt: {Name:mkccad67cc705bb7c6228d7393e2d18a87f92ed1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 22:26:54.077360 9998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.key ...
I1210 22:26:54.077372 9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/client.key: {Name:mk221bcde58651631aa74395b3ed7c76a192e171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 22:26:54.077951 9998 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.key.90118dbb
I1210 22:26:54.077974 9998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.crt.90118dbb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.89]
I1210 22:26:54.207953 9998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.crt.90118dbb ...
I1210 22:26:54.207979 9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.crt.90118dbb: {Name:mkf9e4d36e9bfce0ff658089c69123e4aee1e819 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 22:26:54.208126 9998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.key.90118dbb ...
I1210 22:26:54.208140 9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.key.90118dbb: {Name:mk0c5a74b707644ae88eeb264b79701a440d00cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 22:26:54.208207 9998 certs.go:382] copying /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.crt.90118dbb -> /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.crt
I1210 22:26:54.208274 9998 certs.go:386] copying /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.key.90118dbb -> /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.key
I1210 22:26:54.208316 9998 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/proxy-client.key
I1210 22:26:54.208334 9998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/proxy-client.crt with IP's: []
I1210 22:26:54.448685 9998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/proxy-client.crt ...
I1210 22:26:54.448713 9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/proxy-client.crt: {Name:mk1c5213d9313745a64deed022c18a32542a6972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 22:26:54.448884 9998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/proxy-client.key ...
I1210 22:26:54.448897 9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/proxy-client.key: {Name:mk2e78c1fe91d458f8a06a11151c39f823e990b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 22:26:54.449056 9998 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca-key.pem (1679 bytes)
I1210 22:26:54.449092 9998 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/ca.pem (1078 bytes)
I1210 22:26:54.449118 9998 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/cert.pem (1123 bytes)
I1210 22:26:54.449140 9998 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-5125/.minikube/certs/key.pem (1675 bytes)
I1210 22:26:54.449695 9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1210 22:26:54.488985 9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1210 22:26:54.529665 9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1210 22:26:54.558981 9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1210 22:26:54.587754 9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1210 22:26:54.616075 9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1210 22:26:54.644579 9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1210 22:26:54.673227 9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/profiles/addons-462156/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1210 22:26:54.702403 9998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-5125/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1210 22:26:54.732947 9998 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1210 22:26:54.752250 9998 ssh_runner.go:195] Run: openssl version
I1210 22:26:54.758248 9998 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1210 22:26:54.769416 9998 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1210 22:26:54.780660 9998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1210 22:26:54.785641 9998 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 22:26 /usr/share/ca-certificates/minikubeCA.pem
I1210 22:26:54.785691 9998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1210 22:26:54.792734 9998 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1210 22:26:54.803510 9998 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
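The `openssl x509 -hash` call above computes the subject hash that OpenSSL uses to look up trusted CAs in `/etc/ssl/certs` — the `b5213941.0` symlink is `<subject-hash>.0` for minikubeCA. A small sketch of the same link-by-hash step, assuming `openssl` is available and using a throwaway self-signed cert in a temp dir:

```shell
set -eu
DIR=$(mktemp -d)                   # stand-in for /etc/ssl/certs
# Generate a throwaway self-signed CA (stand-in for minikubeCA.pem).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$DIR/ca.key" -out "$DIR/ca.pem" 2>/dev/null

# Compute the subject hash OpenSSL expects as the link name, then link it.
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"  # mirrors `ln -fs ... /etc/ssl/certs/b5213941.0`
test -L "$DIR/$HASH.0" && echo "linked as $HASH.0"
```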
I1210 22:26:54.814404 9998 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1210 22:26:54.818937 9998 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1210 22:26:54.818991 9998 kubeadm.go:401] StartCluster: {Name:addons-462156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-462156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1210 22:26:54.819059 9998 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1210 22:26:54.819132 9998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1210 22:26:54.851100 9998 cri.go:89] found id: ""
I1210 22:26:54.851172 9998 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1210 22:26:54.863054 9998 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1210 22:26:54.874575 9998 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1210 22:26:54.885891 9998 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1210 22:26:54.885909 9998 kubeadm.go:158] found existing configuration files:
I1210 22:26:54.885961 9998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1210 22:26:54.896727 9998 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1210 22:26:54.896792 9998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1210 22:26:54.908147 9998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1210 22:26:54.918671 9998 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1210 22:26:54.918739 9998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1210 22:26:54.930112 9998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1210 22:26:54.940879 9998 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1210 22:26:54.940934 9998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1210 22:26:54.952251 9998 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1210 22:26:54.963133 9998 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1210 22:26:54.963194 9998 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
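The four checks above repeat one pattern per kubeconfig file: grep for the expected control-plane endpoint, and `rm -f` the file when the grep fails (here every grep fails because the files don't exist yet, so each `rm -f` is a no-op). A condensed sketch of that cleanup loop, using a temp dir in place of `/etc/kubernetes`:

```shell
set -eu
KDIR=$(mktemp -d)                  # stand-in for /etc/kubernetes
ENDPOINT="https://control-plane.minikube.internal:8443"
# Seed one stale file (wrong endpoint) and one good file; the other two are absent.
printf 'server: https://old-endpoint:8443\n' > "$KDIR/admin.conf"
printf 'server: %s\n' "$ENDPOINT"            > "$KDIR/kubelet.conf"

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  # Keep the file only if it references the expected control-plane endpoint.
  if ! grep -q "$ENDPOINT" "$KDIR/$f" 2>/dev/null; then
    rm -f "$KDIR/$f"               # mirrors `sudo rm -f /etc/kubernetes/<file>`
  fi
done
ls "$KDIR"                         # only the file with the right endpoint survives
```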
I1210 22:26:54.974336 9998 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1210 22:26:55.022873 9998 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
I1210 22:26:55.022930 9998 kubeadm.go:319] [preflight] Running pre-flight checks
I1210 22:26:55.115925 9998 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1210 22:26:55.116102 9998 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1210 22:26:55.116240 9998 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1210 22:26:55.127231 9998 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1210 22:26:55.129521 9998 out.go:252] - Generating certificates and keys ...
I1210 22:26:55.129609 9998 kubeadm.go:319] [certs] Using existing ca certificate authority
I1210 22:26:55.129701 9998 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1210 22:26:55.390215 9998 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1210 22:26:56.031678 9998 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1210 22:26:56.137282 9998 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1210 22:26:56.517946 9998 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1210 22:26:57.004227 9998 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1210 22:26:57.004464 9998 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-462156 localhost] and IPs [192.168.39.89 127.0.0.1 ::1]
I1210 22:26:57.182564 9998 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1210 22:26:57.182743 9998 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-462156 localhost] and IPs [192.168.39.89 127.0.0.1 ::1]
I1210 22:26:57.500819 9998 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1210 22:26:57.716287 9998 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1210 22:26:57.793897 9998 kubeadm.go:319] [certs] Generating "sa" key and public key
I1210 22:26:57.793965 9998 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1210 22:26:57.841213 9998 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1210 22:26:57.979702 9998 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1210 22:26:58.059939 9998 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1210 22:26:58.197526 9998 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1210 22:26:58.379052 9998 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1210 22:26:58.379333 9998 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1210 22:26:58.381801 9998 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1210 22:26:58.387611 9998 out.go:252] - Booting up control plane ...
I1210 22:26:58.387732 9998 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1210 22:26:58.387833 9998 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1210 22:26:58.387916 9998 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1210 22:26:58.404591 9998 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1210 22:26:58.404706 9998 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1210 22:26:58.411574 9998 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1210 22:26:58.411715 9998 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1210 22:26:58.411776 9998 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1210 22:26:58.558107 9998 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1210 22:26:58.558240 9998 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1210 22:27:00.057561 9998 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501546324s
I1210 22:27:00.060002 9998 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1210 22:27:00.060122 9998 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.89:8443/livez
I1210 22:27:00.060215 9998 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1210 22:27:00.060285 9998 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1210 22:27:02.064112 9998 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.006055079s
I1210 22:27:03.323612 9998 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.267113624s
I1210 22:27:05.561073 9998 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.504855888s
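The control-plane checks above poll each component's health endpoint (e.g. `https://127.0.0.1:10257/healthz`) until it answers, with an overall deadline. A generic retry helper in the same spirit — here the probed command is a parameter rather than a hard-coded curl, so the helper is illustrative, not kubeadm's actual implementation:

```shell
set -eu
wait_healthy() {   # $1 = max attempts, $2 = delay between attempts (s), rest = probe command
  n=$1; delay=$2; shift 2
  i=0
  while [ "$i" -lt "$n" ]; do
    if "$@" > /dev/null 2>&1; then
      echo "healthy after $((i + 1)) attempt(s)"
      return 0
    fi
    i=$((i + 1)); sleep "$delay"
  done
  echo "still unhealthy after $n attempts" >&2
  return 1
}

# A real probe would be e.g.: wait_healthy 240 1 curl -ksf https://127.0.0.1:10257/healthz
wait_healthy 3 0 true                      # probe succeeds immediately
wait_healthy 2 0 false || echo "gave up"   # probe never succeeds; retries are exhausted
```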
I1210 22:27:05.581240 9998 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1210 22:27:05.598086 9998 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1210 22:27:05.613522 9998 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1210 22:27:05.613771 9998 kubeadm.go:319] [mark-control-plane] Marking the node addons-462156 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1210 22:27:05.627696 9998 kubeadm.go:319] [bootstrap-token] Using token: 0h1f2a.ay6wbb2g4r1dwsjt
I1210 22:27:05.629148 9998 out.go:252] - Configuring RBAC rules ...
I1210 22:27:05.629264 9998 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1210 22:27:05.635704 9998 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1210 22:27:05.648556 9998 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1210 22:27:05.658081 9998 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1210 22:27:05.662264 9998 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1210 22:27:05.666680 9998 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1210 22:27:05.966472 9998 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1210 22:27:06.411935 9998 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1210 22:27:06.965878 9998 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1210 22:27:06.967563 9998 kubeadm.go:319]
I1210 22:27:06.967634 9998 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1210 22:27:06.967660 9998 kubeadm.go:319]
I1210 22:27:06.967738 9998 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1210 22:27:06.967746 9998 kubeadm.go:319]
I1210 22:27:06.967768 9998 kubeadm.go:319] mkdir -p $HOME/.kube
I1210 22:27:06.967823 9998 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1210 22:27:06.967902 9998 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1210 22:27:06.967915 9998 kubeadm.go:319]
I1210 22:27:06.967978 9998 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1210 22:27:06.967986 9998 kubeadm.go:319]
I1210 22:27:06.968034 9998 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1210 22:27:06.968043 9998 kubeadm.go:319]
I1210 22:27:06.968139 9998 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1210 22:27:06.968250 9998 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1210 22:27:06.968345 9998 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1210 22:27:06.968355 9998 kubeadm.go:319]
I1210 22:27:06.968494 9998 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1210 22:27:06.968605 9998 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1210 22:27:06.968615 9998 kubeadm.go:319]
I1210 22:27:06.968749 9998 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0h1f2a.ay6wbb2g4r1dwsjt \
I1210 22:27:06.968900 9998 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:fd318d48817654ae7d58380c81fceba02f616127cf15d0ed84bb8d49ffe71ffb \
I1210 22:27:06.968942 9998 kubeadm.go:319] --control-plane
I1210 22:27:06.968952 9998 kubeadm.go:319]
I1210 22:27:06.969094 9998 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1210 22:27:06.969112 9998 kubeadm.go:319]
I1210 22:27:06.969225 9998 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0h1f2a.ay6wbb2g4r1dwsjt \
I1210 22:27:06.969376 9998 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:fd318d48817654ae7d58380c81fceba02f616127cf15d0ed84bb8d49ffe71ffb
I1210 22:27:06.969661 9998 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1210 22:27:06.969685 9998 cni.go:84] Creating CNI manager for ""
I1210 22:27:06.969697 9998 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1210 22:27:06.971833 9998 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1210 22:27:06.972967 9998 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1210 22:27:06.985553 9998 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1210 22:27:07.009699 9998 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1210 22:27:07.009765 9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1210 22:27:07.009868 9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-462156 minikube.k8s.io/updated_at=2025_12_10T22_27_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6 minikube.k8s.io/name=addons-462156 minikube.k8s.io/primary=true
I1210 22:27:07.155580 9998 ops.go:34] apiserver oom_adj: -16
I1210 22:27:07.155642 9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1210 22:27:07.656067 9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1210 22:27:08.155995 9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1210 22:27:08.655769 9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1210 22:27:09.155882 9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1210 22:27:09.655794 9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1210 22:27:10.156090 9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1210 22:27:10.656720 9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1210 22:27:11.155700 9998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1210 22:27:11.239783 9998 kubeadm.go:1114] duration metric: took 4.230082645s to wait for elevateKubeSystemPrivileges
I1210 22:27:11.239817 9998 kubeadm.go:403] duration metric: took 16.420832459s to StartCluster
I1210 22:27:11.239834 9998 settings.go:142] acquiring lock: {Name:mkb6311113a1595706e930e5ec066489475d2931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 22:27:11.239972 9998 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22061-5125/kubeconfig
I1210 22:27:11.240347 9998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-5125/kubeconfig: {Name:mkc997741ee5522db4814beb6df9db1a27fdfa83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 22:27:11.240609 9998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1210 22:27:11.240640 9998 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.89 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1210 22:27:11.240689 9998 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1210 22:27:11.240809 9998 addons.go:70] Setting yakd=true in profile "addons-462156"
I1210 22:27:11.240831 9998 addons.go:239] Setting addon yakd=true in "addons-462156"
I1210 22:27:11.240854 9998 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-462156"
I1210 22:27:11.240871 9998 addons.go:70] Setting default-storageclass=true in profile "addons-462156"
I1210 22:27:11.240880 9998 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-462156"
I1210 22:27:11.240890 9998 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-462156"
I1210 22:27:11.240894 9998 addons.go:70] Setting ingress-dns=true in profile "addons-462156"
I1210 22:27:11.240898 9998 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-462156"
I1210 22:27:11.240906 9998 addons.go:239] Setting addon ingress-dns=true in "addons-462156"
I1210 22:27:11.240921 9998 config.go:182] Loaded profile config "addons-462156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:27:11.240941 9998 host.go:66] Checking if "addons-462156" exists ...
I1210 22:27:11.240950 9998 host.go:66] Checking if "addons-462156" exists ...
I1210 22:27:11.240971 9998 addons.go:70] Setting storage-provisioner=true in profile "addons-462156"
I1210 22:27:11.240986 9998 addons.go:239] Setting addon storage-provisioner=true in "addons-462156"
I1210 22:27:11.240995 9998 addons.go:70] Setting gcp-auth=true in profile "addons-462156"
I1210 22:27:11.241015 9998 addons.go:70] Setting registry=true in profile "addons-462156"
I1210 22:27:11.241025 9998 addons.go:239] Setting addon registry=true in "addons-462156"
I1210 22:27:11.241027 9998 mustload.go:66] Loading cluster: addons-462156
I1210 22:27:11.241041 9998 host.go:66] Checking if "addons-462156" exists ...
I1210 22:27:11.241066 9998 addons.go:70] Setting inspektor-gadget=true in profile "addons-462156"
I1210 22:27:11.241092 9998 addons.go:239] Setting addon inspektor-gadget=true in "addons-462156"
I1210 22:27:11.241120 9998 host.go:66] Checking if "addons-462156" exists ...
I1210 22:27:11.241196 9998 config.go:182] Loaded profile config "addons-462156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 22:27:11.241405 9998 addons.go:70] Setting registry-creds=true in profile "addons-462156"
I1210 22:27:11.241420 9998 addons.go:239] Setting addon registry-creds=true in "addons-462156"
I1210 22:27:11.241457 9998 host.go:66] Checking if "addons-462156" exists ...
I1210 22:27:11.241968 9998 addons.go:70] Setting metrics-server=true in profile "addons-462156"
I1210 22:27:11.242042 9998 addons.go:239] Setting addon metrics-server=true in "addons-462156"
I1210 22:27:11.242104 9998 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-462156"
I1210 22:27:11.242144 9998 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-462156"
I1210 22:27:11.242223 9998 addons.go:70] Setting volcano=true in profile "addons-462156"
I1210 22:27:11.242294 9998 addons.go:239] Setting addon volcano=true in "addons-462156"
I1210 22:27:11.242339 9998 host.go:66] Checking if "addons-462156" exists ...
I1210 22:27:11.242423 9998 addons.go:70] Setting ingress=true in profile "addons-462156"
I1210 22:27:11.240861 9998 host.go:66] Checking if "addons-462156" exists ...
I1210 22:27:11.242463 9998 addons.go:239] Setting addon ingress=true in "addons-462156"
I1210 22:27:11.242499 9998 host.go:66] Checking if "addons-462156" exists ...
I1210 22:27:11.242617 9998 addons.go:70] Setting volumesnapshots=true in profile "addons-462156"
I1210 22:27:11.242636 9998 addons.go:239] Setting addon volumesnapshots=true in "addons-462156"
I1210 22:27:11.242658 9998 host.go:66] Checking if "addons-462156" exists ...
I1210 22:27:11.242907 9998 out.go:179] * Verifying Kubernetes components...
I1210 22:27:11.241006 9998 host.go:66] Checking if "addons-462156" exists ...
I1210 22:27:11.240811 9998 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-462156"
I1210 22:27:11.243181 9998 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-462156"
I1210 22:27:11.243215 9998 host.go:66] Checking if "addons-462156" exists ...
I1210 22:27:11.242109 9998 host.go:66] Checking if "addons-462156" exists ...
I1210 22:27:11.240883 9998 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-462156"
I1210 22:27:11.243388 9998 host.go:66] Checking if "addons-462156" exists ...
I1210 22:27:11.240853 9998 addons.go:70] Setting cloud-spanner=true in profile "addons-462156"
I1210 22:27:11.243765 9998 addons.go:239] Setting addon cloud-spanner=true in "addons-462156"
I1210 22:27:11.243786 9998 host.go:66] Checking if "addons-462156" exists ...
I1210 22:27:11.244489 9998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1210 22:27:11.247678 9998 host.go:66] Checking if "addons-462156" exists ...
I1210 22:27:11.247732 9998 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1210 22:27:11.248674 9998 addons.go:239] Setting addon default-storageclass=true in "addons-462156"
I1210 22:27:11.248708 9998 host.go:66] Checking if "addons-462156" exists ...
I1210 22:27:11.250317 9998 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1210 22:27:11.250332 9998 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
I1210 22:27:11.250356 9998 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1210 22:27:11.250380 9998 out.go:179] - Using image docker.io/registry:3.0.0
I1210 22:27:11.250336 9998 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-462156"
I1210 22:27:11.250907 9998 host.go:66] Checking if "addons-462156" exists ...
I1210 22:27:11.250390 9998 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
W1210 22:27:11.251376 9998 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1210 22:27:11.252133 9998 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1210 22:27:11.252151 9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1210 22:27:11.252978 9998 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1210 22:27:11.252982 9998 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1210 22:27:11.253371 9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1210 22:27:11.252986 9998 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1210 22:27:11.253459 9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1210 22:27:11.253012 9998 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1210 22:27:11.253577 9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1210 22:27:11.253022 9998 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1210 22:27:11.253645 9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1210 22:27:11.252937 9998 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1210 22:27:11.253836 9998 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1210 22:27:11.253838 9998 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1210 22:27:11.253838 9998 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1210 22:27:11.253846 9998 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1210 22:27:11.253870 9998 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1210 22:27:11.254705 9998 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1210 22:27:11.255089 9998 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1210 22:27:11.254193 9998 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1210 22:27:11.255182 9998 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1210 22:27:11.255510 9998 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1210 22:27:11.255536 9998 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1210 22:27:11.255555 9998 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1210 22:27:11.255567 9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1210 22:27:11.255519 9998 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
I1210 22:27:11.256267 9998 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1210 22:27:11.256286 9998 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1210 22:27:11.256307 9998 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1210 22:27:11.256361 9998 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1210 22:27:11.256373 9998 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1210 22:27:11.256731 9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1210 22:27:11.257028 9998 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1210 22:27:11.257045 9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1210 22:27:11.257762 9998 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1210 22:27:11.258958 9998 out.go:179] - Using image docker.io/busybox:stable
I1210 22:27:11.259053 9998 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
I1210 22:27:11.260807 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.261050 9998 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1210 22:27:11.261065 9998 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1210 22:27:11.261068 9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1210 22:27:11.262118 9998 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1210 22:27:11.262201 9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1210 22:27:11.262898 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:27:11.262942 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.263683 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.263860 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:27:11.264153 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.264182 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.264336 9998 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1210 22:27:11.264877 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.265499 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:27:11.265560 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.265662 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:27:11.265726 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:27:11.265759 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.265886 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.265951 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:27:11.265999 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.266220 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:27:11.266300 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.266650 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.266787 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:27:11.266888 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:27:11.267000 9998 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1210 22:27:11.267239 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:27:11.267990 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.268091 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.268174 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:27:11.268203 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.268327 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:27:11.268353 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.268458 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.268563 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:27:11.268852 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.268962 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:27:11.269182 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:27:11.269211 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.269278 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:27:11.269315 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.269532 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.269565 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:27:11.269592 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.269746 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:27:11.269841 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:27:11.269861 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.269883 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:27:11.269948 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:27:11.269974 9998 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1210 22:27:11.270343 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:27:11.270714 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:27:11.270745 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.270961 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:27:11.271307 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.271763 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:27:11.271777 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.271800 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.272019 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:27:11.272306 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:27:11.272333 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.272394 9998 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1210 22:27:11.272514 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:27:11.274835 9998 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1210 22:27:11.275968 9998 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1210 22:27:11.275984 9998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1210 22:27:11.278040 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.278526 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:27:11.278556 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:11.278707 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:27:12.083092 9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1210 22:27:12.089384 9998 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1210 22:27:12.089415 9998 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1210 22:27:12.108603 9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1210 22:27:12.110094 9998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1210 22:27:12.110117 9998 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1210 22:27:12.116697 9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1210 22:27:12.147041 9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1210 22:27:12.160103 9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1210 22:27:12.175353 9998 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1210 22:27:12.175382 9998 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1210 22:27:12.210459 9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1210 22:27:12.227717 9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1210 22:27:12.346473 9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1210 22:27:12.357239 9998 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1210 22:27:12.357265 9998 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1210 22:27:12.365743 9998 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1210 22:27:12.365764 9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1210 22:27:12.376747 9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1210 22:27:12.378130 9998 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1210 22:27:12.378149 9998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1210 22:27:12.595045 9998 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1210 22:27:12.595074 9998 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1210 22:27:12.623616 9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1210 22:27:12.848988 9998 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1210 22:27:12.849018 9998 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1210 22:27:12.950376 9998 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1210 22:27:12.950405 9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1210 22:27:13.009501 9998 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1210 22:27:13.009534 9998 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1210 22:27:13.087689 9998 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1210 22:27:13.087720 9998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1210 22:27:13.368087 9998 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1210 22:27:13.368114 9998 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1210 22:27:13.460638 9998 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1210 22:27:13.460663 9998 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1210 22:27:13.521676 9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1210 22:27:13.560893 9998 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1210 22:27:13.560918 9998 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1210 22:27:13.636025 9998 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1210 22:27:13.636053 9998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1210 22:27:13.773157 9998 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1210 22:27:13.773178 9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1210 22:27:13.845003 9998 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1210 22:27:13.845046 9998 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1210 22:27:13.931408 9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1210 22:27:14.052334 9998 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1210 22:27:14.052359 9998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1210 22:27:14.159970 9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1210 22:27:14.181920 9998 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1210 22:27:14.181948 9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1210 22:27:14.410122 9998 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1210 22:27:14.410153 9998 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1210 22:27:14.496152 9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1210 22:27:14.808310 9998 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1210 22:27:14.808334 9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1210 22:27:15.137188 9998 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1210 22:27:15.137214 9998 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1210 22:27:15.748712 9998 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1210 22:27:15.748737 9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1210 22:27:16.126262 9998 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1210 22:27:16.126288 9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1210 22:27:16.461394 9998 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1210 22:27:16.461430 9998 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1210 22:27:16.738249 9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1210 22:27:18.779730 9998 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1210 22:27:18.782680 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:18.783156 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:27:18.783187 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:18.783345 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:27:19.258012 9998 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1210 22:27:19.417390 9998 addons.go:239] Setting addon gcp-auth=true in "addons-462156"
I1210 22:27:19.417470 9998 host.go:66] Checking if "addons-462156" exists ...
I1210 22:27:19.419693 9998 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1210 22:27:19.422550 9998 main.go:143] libmachine: domain addons-462156 has defined MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:19.423074 9998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:7a:8f", ip: ""} in network mk-addons-462156: {Iface:virbr1 ExpiryTime:2025-12-10 23:26:47 +0000 UTC Type:0 Mac:52:54:00:8c:7a:8f Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-462156 Clientid:01:52:54:00:8c:7a:8f}
I1210 22:27:19.423110 9998 main.go:143] libmachine: domain addons-462156 has defined IP address 192.168.39.89 and MAC address 52:54:00:8c:7a:8f in network mk-addons-462156
I1210 22:27:19.423324 9998 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22061-5125/.minikube/machines/addons-462156/id_rsa Username:docker}
I1210 22:27:20.295252 9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.212128045s)
I1210 22:27:20.295284 9998 addons.go:495] Verifying addon ingress=true in "addons-462156"
I1210 22:27:20.295364 9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.186732293s)
I1210 22:27:20.295516 9998 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.185363131s)
I1210 22:27:20.295558 9998 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.18542805s)
I1210 22:27:20.295577 9998 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1210 22:27:20.295644 9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.17891331s)
I1210 22:27:20.295707 9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.14863982s)
I1210 22:27:20.295780 9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.135648198s)
I1210 22:27:20.295808 9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.085322242s)
I1210 22:27:20.295924 9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.068145496s)
I1210 22:27:20.295974 9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.949470459s)
I1210 22:27:20.295996 9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.919218159s)
I1210 22:27:20.296062 9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.672421397s)
I1210 22:27:20.296106 9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.774400599s)
I1210 22:27:20.296128 9998 addons.go:495] Verifying addon registry=true in "addons-462156"
I1210 22:27:20.296342 9998 node_ready.go:35] waiting up to 6m0s for node "addons-462156" to be "Ready" ...
I1210 22:27:20.296248 9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.364802739s)
I1210 22:27:20.296389 9998 addons.go:495] Verifying addon metrics-server=true in "addons-462156"
I1210 22:27:20.296340 9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.136341258s)
I1210 22:27:20.296915 9998 out.go:179] * Verifying ingress addon...
I1210 22:27:20.297870 9998 out.go:179] * Verifying registry addon...
I1210 22:27:20.297878 9998 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-462156 service yakd-dashboard -n yakd-dashboard
I1210 22:27:20.299836 9998 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1210 22:27:20.300148 9998 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1210 22:27:20.329088 9998 node_ready.go:49] node "addons-462156" is "Ready"
I1210 22:27:20.329112 9998 node_ready.go:38] duration metric: took 32.747065ms for node "addons-462156" to be "Ready" ...
I1210 22:27:20.329124 9998 api_server.go:52] waiting for apiserver process to appear ...
I1210 22:27:20.329176 9998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1210 22:27:20.333537 9998 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1210 22:27:20.333558 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:20.333646 9998 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1210 22:27:20.333664 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
W1210 22:27:20.384940 9998 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I1210 22:27:20.577474 9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.081252281s)
W1210 22:27:20.577529 9998 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1210 22:27:20.577562 9998 retry.go:31] will retry after 230.344397ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1210 22:27:20.802239 9998 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-462156" context rescaled to 1 replicas
I1210 22:27:20.808033 9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1210 22:27:20.808844 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:20.813250 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:21.392551 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:21.396505 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:21.438815 9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.700515364s)
I1210 22:27:21.438856 9998 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.019132439s)
I1210 22:27:21.438884 9998 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.109690734s)
I1210 22:27:21.438902 9998 api_server.go:72] duration metric: took 10.198235938s to wait for apiserver process to appear ...
I1210 22:27:21.438909 9998 api_server.go:88] waiting for apiserver healthz status ...
I1210 22:27:21.438927 9998 api_server.go:253] Checking apiserver healthz at https://192.168.39.89:8443/healthz ...
I1210 22:27:21.438859 9998 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-462156"
I1210 22:27:21.440935 9998 out.go:179] * Verifying csi-hostpath-driver addon...
I1210 22:27:21.440934 9998 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1210 22:27:21.442360 9998 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1210 22:27:21.442771 9998 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1210 22:27:21.443585 9998 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1210 22:27:21.443601 9998 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1210 22:27:21.502730 9998 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1210 22:27:21.502761 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:21.510915 9998 api_server.go:279] https://192.168.39.89:8443/healthz returned 200:
ok
I1210 22:27:21.521564 9998 api_server.go:141] control plane version: v1.34.2
I1210 22:27:21.521625 9998 api_server.go:131] duration metric: took 82.706899ms to wait for apiserver health ...
I1210 22:27:21.521638 9998 system_pods.go:43] waiting for kube-system pods to appear ...
I1210 22:27:21.545701 9998 system_pods.go:59] 20 kube-system pods found
I1210 22:27:21.545736 9998 system_pods.go:61] "amd-gpu-device-plugin-t84vv" [49aaeb54-4c35-4927-8903-28c074178738] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1210 22:27:21.545743 9998 system_pods.go:61] "coredns-66bc5c9577-4w6v4" [65e6ede4-ca2c-4eb9-a3d1-a4209459a010] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1210 22:27:21.545750 9998 system_pods.go:61] "coredns-66bc5c9577-lh65b" [35786400-7e12-45f3-a524-9b2ecdf2a3c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1210 22:27:21.545756 9998 system_pods.go:61] "csi-hostpath-attacher-0" [d7766fe6-b121-4def-b39d-a4e8148d691f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1210 22:27:21.545761 9998 system_pods.go:61] "csi-hostpath-resizer-0" [a77816c2-7bdc-4799-8c3e-f5e522b532fb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1210 22:27:21.545769 9998 system_pods.go:61] "csi-hostpathplugin-4ktdr" [983cebd7-5378-4d08-bbde-53a7d16d5e75] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1210 22:27:21.545773 9998 system_pods.go:61] "etcd-addons-462156" [6b1c99f1-0ade-4885-b63a-5cb4b0f77c96] Running
I1210 22:27:21.545777 9998 system_pods.go:61] "kube-apiserver-addons-462156" [b596f37d-91a2-4b92-864c-dfa47885ddaf] Running
I1210 22:27:21.545780 9998 system_pods.go:61] "kube-controller-manager-addons-462156" [f944b071-7099-4e85-895e-04dc4be2254d] Running
I1210 22:27:21.545785 9998 system_pods.go:61] "kube-ingress-dns-minikube" [ebd516b6-c87a-40e2-a707-75ee9f2dfe60] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1210 22:27:21.545798 9998 system_pods.go:61] "kube-proxy-p4fsb" [7573193d-6d1a-4234-a12c-343613e99d1e] Running
I1210 22:27:21.545803 9998 system_pods.go:61] "kube-scheduler-addons-462156" [0ce509bc-4d77-42f4-8f26-b0bb89f9489a] Running
I1210 22:27:21.545807 9998 system_pods.go:61] "metrics-server-85b7d694d7-t4kn5" [72239687-ab58-4aee-b697-075933963bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1210 22:27:21.545814 9998 system_pods.go:61] "nvidia-device-plugin-daemonset-2knz8" [e3f636bc-8db9-4dc3-851a-f1331a2516e8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1210 22:27:21.545819 9998 system_pods.go:61] "registry-6b586f9694-hbcct" [f09be740-9c3b-4dc9-ae13-adfd16ccaec2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1210 22:27:21.545824 9998 system_pods.go:61] "registry-creds-764b6fb674-vz624" [a07caa13-412e-4ac4-a9a0-4ff42d41ed39] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1210 22:27:21.545829 9998 system_pods.go:61] "registry-proxy-bs796" [dd3cf5fe-024d-49ac-9781-1c16ce0767bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1210 22:27:21.545834 9998 system_pods.go:61] "snapshot-controller-7d9fbc56b8-x7c9l" [2085f3db-d1b7-4f0b-8cc4-ee9d492ba05d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1210 22:27:21.545839 9998 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xgm5z" [27e6d8a8-39b6-461b-8a95-b5810cb5e347] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1210 22:27:21.545844 9998 system_pods.go:61] "storage-provisioner" [34acfc61-a61c-4021-9f68-bfd552138291] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1210 22:27:21.545850 9998 system_pods.go:74] duration metric: took 24.206939ms to wait for pod list to return data ...
I1210 22:27:21.545859 9998 default_sa.go:34] waiting for default service account to be created ...
I1210 22:27:21.554985 9998 default_sa.go:45] found service account: "default"
I1210 22:27:21.555009 9998 default_sa.go:55] duration metric: took 9.145333ms for default service account to be created ...
I1210 22:27:21.555019 9998 system_pods.go:116] waiting for k8s-apps to be running ...
I1210 22:27:21.555589 9998 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1210 22:27:21.555624 9998 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1210 22:27:21.592667 9998 system_pods.go:86] 20 kube-system pods found
I1210 22:27:21.592708 9998 system_pods.go:89] "amd-gpu-device-plugin-t84vv" [49aaeb54-4c35-4927-8903-28c074178738] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1210 22:27:21.592719 9998 system_pods.go:89] "coredns-66bc5c9577-4w6v4" [65e6ede4-ca2c-4eb9-a3d1-a4209459a010] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1210 22:27:21.592731 9998 system_pods.go:89] "coredns-66bc5c9577-lh65b" [35786400-7e12-45f3-a524-9b2ecdf2a3c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1210 22:27:21.592740 9998 system_pods.go:89] "csi-hostpath-attacher-0" [d7766fe6-b121-4def-b39d-a4e8148d691f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1210 22:27:21.592748 9998 system_pods.go:89] "csi-hostpath-resizer-0" [a77816c2-7bdc-4799-8c3e-f5e522b532fb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1210 22:27:21.592755 9998 system_pods.go:89] "csi-hostpathplugin-4ktdr" [983cebd7-5378-4d08-bbde-53a7d16d5e75] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1210 22:27:21.592759 9998 system_pods.go:89] "etcd-addons-462156" [6b1c99f1-0ade-4885-b63a-5cb4b0f77c96] Running
I1210 22:27:21.592766 9998 system_pods.go:89] "kube-apiserver-addons-462156" [b596f37d-91a2-4b92-864c-dfa47885ddaf] Running
I1210 22:27:21.592776 9998 system_pods.go:89] "kube-controller-manager-addons-462156" [f944b071-7099-4e85-895e-04dc4be2254d] Running
I1210 22:27:21.592784 9998 system_pods.go:89] "kube-ingress-dns-minikube" [ebd516b6-c87a-40e2-a707-75ee9f2dfe60] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1210 22:27:21.592799 9998 system_pods.go:89] "kube-proxy-p4fsb" [7573193d-6d1a-4234-a12c-343613e99d1e] Running
I1210 22:27:21.592807 9998 system_pods.go:89] "kube-scheduler-addons-462156" [0ce509bc-4d77-42f4-8f26-b0bb89f9489a] Running
I1210 22:27:21.592816 9998 system_pods.go:89] "metrics-server-85b7d694d7-t4kn5" [72239687-ab58-4aee-b697-075933963bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1210 22:27:21.592823 9998 system_pods.go:89] "nvidia-device-plugin-daemonset-2knz8" [e3f636bc-8db9-4dc3-851a-f1331a2516e8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1210 22:27:21.592836 9998 system_pods.go:89] "registry-6b586f9694-hbcct" [f09be740-9c3b-4dc9-ae13-adfd16ccaec2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1210 22:27:21.592841 9998 system_pods.go:89] "registry-creds-764b6fb674-vz624" [a07caa13-412e-4ac4-a9a0-4ff42d41ed39] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1210 22:27:21.592848 9998 system_pods.go:89] "registry-proxy-bs796" [dd3cf5fe-024d-49ac-9781-1c16ce0767bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1210 22:27:21.592856 9998 system_pods.go:89] "snapshot-controller-7d9fbc56b8-x7c9l" [2085f3db-d1b7-4f0b-8cc4-ee9d492ba05d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1210 22:27:21.592869 9998 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xgm5z" [27e6d8a8-39b6-461b-8a95-b5810cb5e347] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1210 22:27:21.592878 9998 system_pods.go:89] "storage-provisioner" [34acfc61-a61c-4021-9f68-bfd552138291] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1210 22:27:21.592888 9998 system_pods.go:126] duration metric: took 37.863274ms to wait for k8s-apps to be running ...
I1210 22:27:21.592899 9998 system_svc.go:44] waiting for kubelet service to be running ....
I1210 22:27:21.592948 9998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1210 22:27:21.713677 9998 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1210 22:27:21.713709 9998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1210 22:27:21.797422 9998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1210 22:27:21.813346 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:21.814860 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:21.949792 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:22.312455 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:22.314640 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:22.450356 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:22.800636 9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.992556298s)
I1210 22:27:22.800713 9998 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.207738666s)
I1210 22:27:22.800736 9998 system_svc.go:56] duration metric: took 1.207835402s WaitForService to wait for kubelet
I1210 22:27:22.800751 9998 kubeadm.go:587] duration metric: took 11.560083814s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1210 22:27:22.800779 9998 node_conditions.go:102] verifying NodePressure condition ...
I1210 22:27:22.808415 9998 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1210 22:27:22.808457 9998 node_conditions.go:123] node cpu capacity is 2
I1210 22:27:22.808478 9998 node_conditions.go:105] duration metric: took 7.692838ms to run NodePressure ...
I1210 22:27:22.808500 9998 start.go:242] waiting for startup goroutines ...
I1210 22:27:22.811389 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:22.811857 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:22.963231 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:23.295847 9998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.498379256s)
I1210 22:27:23.296965 9998 addons.go:495] Verifying addon gcp-auth=true in "addons-462156"
I1210 22:27:23.299247 9998 out.go:179] * Verifying gcp-auth addon...
I1210 22:27:23.301799 9998 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1210 22:27:23.364109 9998 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1210 22:27:23.364131 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:23.364138 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:23.364265 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:23.454789 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:23.807963 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:23.808172 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:23.812002 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:23.953318 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:24.304218 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:24.306670 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:24.313852 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:24.447779 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:24.803655 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:24.804108 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:24.805871 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:24.946916 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:25.304318 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:25.305312 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:25.305362 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:25.447655 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:25.806088 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:25.811003 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:25.812360 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:25.953130 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:26.306891 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:26.308844 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:26.308880 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:26.446470 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:26.811259 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:26.813424 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:26.813508 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:26.948506 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:27.307099 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:27.309030 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:27.311138 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:27.449721 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:27.806833 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:27.806975 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:27.809246 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:27.948011 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:28.306558 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:28.306676 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:28.306860 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:28.449638 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:28.807961 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:28.807962 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:28.808030 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:28.947000 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:29.310172 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:29.310925 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:29.311472 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:29.447749 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:29.804614 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:29.804885 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:29.805180 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:29.947625 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:30.306941 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:30.307280 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:30.312022 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:30.446915 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:30.804070 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:30.804124 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:30.805363 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:30.947823 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:31.304407 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:31.304638 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:31.306046 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:31.447232 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:31.806932 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:31.807196 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:31.808261 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:31.949666 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:32.303317 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:32.305504 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:32.310394 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:32.447929 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:32.808448 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:32.808646 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:32.808805 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:32.949291 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:33.305087 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:33.305804 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:33.311416 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:33.447075 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:33.806763 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:33.806861 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:33.807549 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:33.949701 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:34.311842 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:34.311897 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:34.312124 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:34.447010 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:34.806561 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:34.808756 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:34.809118 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:34.947058 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:35.304794 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:35.304870 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:35.306239 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:35.447312 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:35.804966 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:35.804988 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:35.805848 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:35.946859 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:36.306276 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:36.306561 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:36.307054 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:36.446937 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:36.805017 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:36.805173 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:36.805917 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:36.947015 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:37.306956 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:37.307676 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:37.308565 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:37.449097 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:37.805067 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:37.805265 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:37.808098 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:37.951532 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:38.308675 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:38.308936 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:38.313481 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:38.448426 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:38.807182 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:38.808095 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:38.808532 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:38.947651 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:39.305612 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:39.306073 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:39.306235 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:39.446432 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:40.090959 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:40.091097 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:40.091112 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:40.091246 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:40.308383 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:40.308433 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:40.308610 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:40.446814 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:40.807524 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:40.807798 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:40.807892 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:40.947333 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:41.308033 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:41.308053 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:41.309490 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:41.448892 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:41.805904 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:41.806380 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:41.809476 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:41.946618 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:42.308673 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:42.313742 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:42.314160 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:42.448049 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:42.805431 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:42.805569 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:42.810523 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:42.948059 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:43.305965 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:43.308216 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:43.309734 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:43.447076 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1210 22:27:43.807719 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:27:43.807736 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1210 22:27:43.807978 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:27:53.306320 9998 kapi.go:107] duration metric: took 33.006167349s to wait for kubernetes.io/minikube-addons=registry ...
I1210 22:28:24.947342 9998 kapi.go:107] duration metric: took 1m3.504568389s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1210 22:28:25.307954 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:28:25.309056 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:25.804083 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:28:25.806691 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:26.304214 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:28:26.307914 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:26.809275 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:26.813377 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:28:27.308924 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:28:27.310827 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:27.811419 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:28:27.814269 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:28.305010 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:28:28.309055 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:28.807579 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:28.808569 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:28:29.358463 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:28:29.360598 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:29.805754 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:29.807057 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:28:30.306167 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:30.306825 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:28:30.807626 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:28:30.808893 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:31.305008 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:28:31.306579 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:31.805116 9998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1210 22:28:31.806055 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:32.306750 9998 kapi.go:107] duration metric: took 1m12.006912383s to wait for app.kubernetes.io/name=ingress-nginx ...
I1210 22:28:32.306802 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:32.805892 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:33.310478 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:33.804775 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:34.306828 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:34.805272 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:35.306495 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:35.805913 9998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1210 22:28:36.308868 9998 kapi.go:107] duration metric: took 1m13.007064708s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1210 22:28:36.310840 9998 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-462156 cluster.
I1210 22:28:36.312304 9998 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1210 22:28:36.313865 9998 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I1210 22:28:36.315293 9998 out.go:179] * Enabled addons: cloud-spanner, registry-creds, storage-provisioner, inspektor-gadget, nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I1210 22:28:36.316567 9998 addons.go:530] duration metric: took 1m25.0758813s for enable addons: enabled=[cloud-spanner registry-creds storage-provisioner inspektor-gadget nvidia-device-plugin amd-gpu-device-plugin ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I1210 22:28:36.316608 9998 start.go:247] waiting for cluster config update ...
I1210 22:28:36.316632 9998 start.go:256] writing updated cluster config ...
I1210 22:28:36.316919 9998 ssh_runner.go:195] Run: rm -f paused
I1210 22:28:36.324369 9998 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1210 22:28:36.409892 9998 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4w6v4" in "kube-system" namespace to be "Ready" or be gone ...
I1210 22:28:36.416205 9998 pod_ready.go:94] pod "coredns-66bc5c9577-4w6v4" is "Ready"
I1210 22:28:36.416245 9998 pod_ready.go:86] duration metric: took 6.324769ms for pod "coredns-66bc5c9577-4w6v4" in "kube-system" namespace to be "Ready" or be gone ...
I1210 22:28:36.418579 9998 pod_ready.go:83] waiting for pod "etcd-addons-462156" in "kube-system" namespace to be "Ready" or be gone ...
I1210 22:28:36.427344 9998 pod_ready.go:94] pod "etcd-addons-462156" is "Ready"
I1210 22:28:36.427368 9998 pod_ready.go:86] duration metric: took 8.767368ms for pod "etcd-addons-462156" in "kube-system" namespace to be "Ready" or be gone ...
I1210 22:28:36.432617 9998 pod_ready.go:83] waiting for pod "kube-apiserver-addons-462156" in "kube-system" namespace to be "Ready" or be gone ...
I1210 22:28:36.439042 9998 pod_ready.go:94] pod "kube-apiserver-addons-462156" is "Ready"
I1210 22:28:36.439066 9998 pod_ready.go:86] duration metric: took 6.427209ms for pod "kube-apiserver-addons-462156" in "kube-system" namespace to be "Ready" or be gone ...
I1210 22:28:36.444417 9998 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-462156" in "kube-system" namespace to be "Ready" or be gone ...
I1210 22:28:36.728875 9998 pod_ready.go:94] pod "kube-controller-manager-addons-462156" is "Ready"
I1210 22:28:36.728901 9998 pod_ready.go:86] duration metric: took 284.466578ms for pod "kube-controller-manager-addons-462156" in "kube-system" namespace to be "Ready" or be gone ...
I1210 22:28:36.928614 9998 pod_ready.go:83] waiting for pod "kube-proxy-p4fsb" in "kube-system" namespace to be "Ready" or be gone ...
I1210 22:28:37.328940 9998 pod_ready.go:94] pod "kube-proxy-p4fsb" is "Ready"
I1210 22:28:37.328963 9998 pod_ready.go:86] duration metric: took 400.313801ms for pod "kube-proxy-p4fsb" in "kube-system" namespace to be "Ready" or be gone ...
I1210 22:28:37.528860 9998 pod_ready.go:83] waiting for pod "kube-scheduler-addons-462156" in "kube-system" namespace to be "Ready" or be gone ...
I1210 22:28:37.929218 9998 pod_ready.go:94] pod "kube-scheduler-addons-462156" is "Ready"
I1210 22:28:37.929241 9998 pod_ready.go:86] duration metric: took 400.351455ms for pod "kube-scheduler-addons-462156" in "kube-system" namespace to be "Ready" or be gone ...
I1210 22:28:37.929251 9998 pod_ready.go:40] duration metric: took 1.604857077s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1210 22:28:37.978192 9998 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
I1210 22:28:37.979994 9998 out.go:179] * Done! kubectl is now configured to use "addons-462156" cluster and "default" namespace by default
==> CRI-O <==
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.701491076Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cec88046-b786-4d0a-8fba-8c3beeca48d4 name=/runtime.v1.RuntimeService/ListContainers
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.701617352Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cec88046-b786-4d0a-8fba-8c3beeca48d4 name=/runtime.v1.RuntimeService/ListContainers
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.702006772Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b558daaacc029aa25cc796bed774b72efd790624e5a5eb1383d2b97c309562ea,PodSandboxId:2e03a0ab62889cdacc0a187105e5216e055dc246f043a2074ee25b869a6087bc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765405762742653064,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38540fef-532f-483f-9d53-b8ff5b9bcf5b,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:834fb0ab7b6faea49641985bcb2768772e0944979420ad46d3ca1e1849e35ec3,PodSandboxId:b20c392c9cca82b15edd7d626b6e7202f043a838a337cbad3dfec804e1de6794,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765405722244575883,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fad4956f-5563-487f-ab71-bb145da43547,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf0c21ab7b3ceda9377011aaac403431726321f7934c3a9f4981f1bf7cfe83e,PodSandboxId:be846878eebfd19267f592532e82e2397dbf525327fdc3fd2493752faebe326c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765405711006992699,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-rr58f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 72d41b3e-1e1f-4171-8195-d86b2e7c3285,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9a644cccdec8282db66a04fddd52c44529cdef25a775c060ec9d1e26a8b9b3a4,PodSandboxId:18528e7312776991dff57d727f058775a2a4400f1107cd4b5b3c65fa1bee8fa9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765405696520924050,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-w8rwc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55912f2f-f973-48e6-871a-fd13b63514c4,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b29d29e7948342fff815c5cb12c3e41bb2776ee2f18e6749d5cda7a619de514,PodSandboxId:a066e40961becacdfb96f5f072ea4d9506f7c29aafd4e32985118c43ebe1cde9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765405694916501907,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-w5dlb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aced33d0-e4e9-4718-9577-433f9aeb7d97,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1370f0b5c5b78b1ee6bd16a460504d25b8c4f5e057577657691cf3ea6fc2309,PodSandboxId:ca5ee6e3983d7a59d851db656bb89679bdd47bfcf189b96af6835249c511890e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765405668005885072,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd516b6-c87a-40e2-a707-75ee9f2dfe60,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698a0a7d5083be5b6f12f498ec6941bfe4d800bcc73ff3529861720066cab23f,PodSandboxId:a10e70dd465a1c932ac585a10868626971dfbe854b007cebd96be9277e8922b9,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765405642250485067,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-t84vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49aaeb54-4c35-4927-8903-28c074178738,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed7661ceb3c5bda624d44999ef4385d224a490e97b78acc1a182006bd21c959b,PodSandboxId:782f9615fd7c0098c379fc3cbf273620ad95ae1066f773944104666dcedc8cfb,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765405640681515957,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34acfc61-a61c-4021-9f68-bfd552138291,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7188c2ead6e38325ac95ba892289d01348cf9c9c155fa6a7e65a29bc07232a88,PodSandboxId:7de6035cd917ec2d92eb275918a5cb16052b639c9ac8857abf294f378014aad1,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765405632910159710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4w6v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e6ede4-ca2c-4eb9-a3d1-a4209459a010,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b613868b0c2037100d4240def2a469aa4753caac82d15afc692242da9ed19a,PodSandboxId:cfe3f2b53d1c6e61538b4d56304bbc9f079b452faebef54026cfe3c209329ebd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765405631948407397,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4fsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7573193d-6d1a-4234-a12c-343613e99d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:865cd4374151004f1fd53eb21d2a5fc6ed8397dd0a3f446acf66d7d8321e5e0d,PodSandboxId:b43cba2a2d554214afaba89e44951ce25b318335a4a397376583c2813a80d78a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765405620448063306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47c0342be96a92673c2f5b0fb1b1cff2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e44079bd7e7a9d94b59112a7952c6c96c3ce5d9d069e8bc423adb650780aa03,PodSandboxId:48ebcb08bc37e596753e845029a35dc214e17777a19f8952bb713d2cd5415744,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765405620450539416,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbfb5a290986c3ba5f3632e753de9b5e,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae57e603fedc97c01344e449201c4da3c80dcedc1fa80b5dd388d358edf71cf,PodSandboxId:cbbc34c759b76b9dcaeb037f35e80a69f71faaeac9311910ae44734901e6d7b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765405620421874969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e96f1f7b4c35a0a45ef34
c4272223c41,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802193904fcda7303eff4ed00463c2bcbb98e5b541274554f07799d55e0a38f0,PodSandboxId:c5876d12b7d7fd1371142b46cccca44e976df7c74bd2ea6b47bbcb78f0199842,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765405620403263065,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe8e1ab4407bcf4ce945d6cc19196b5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cec88046-b786-4d0a-8fba-8c3beeca48d4 name=/runtime.v1.RuntimeService/ListContainers
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.719537086Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.741680559Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d998ff85-eaa9-414d-b434-6b5552e70d9b name=/runtime.v1.RuntimeService/Version
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.741979967Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d998ff85-eaa9-414d-b434-6b5552e70d9b name=/runtime.v1.RuntimeService/Version
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.743640285Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=678c4570-6859-4d0e-8d70-31a304c8e44b name=/runtime.v1.ImageService/ImageFsInfo
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.745272322Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765405905745244008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=678c4570-6859-4d0e-8d70-31a304c8e44b name=/runtime.v1.ImageService/ImageFsInfo
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.746172976Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a488a87-91f0-402f-9855-8b1f3d00c13a name=/runtime.v1.RuntimeService/ListContainers
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.746229691Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a488a87-91f0-402f-9855-8b1f3d00c13a name=/runtime.v1.RuntimeService/ListContainers
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.746523328Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b558daaacc029aa25cc796bed774b72efd790624e5a5eb1383d2b97c309562ea,PodSandboxId:2e03a0ab62889cdacc0a187105e5216e055dc246f043a2074ee25b869a6087bc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765405762742653064,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38540fef-532f-483f-9d53-b8ff5b9bcf5b,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:834fb0ab7b6faea49641985bcb2768772e0944979420ad46d3ca1e1849e35ec3,PodSandboxId:b20c392c9cca82b15edd7d626b6e7202f043a838a337cbad3dfec804e1de6794,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765405722244575883,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fad4956f-5563-487f-ab71-bb145da43547,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf0c21ab7b3ceda9377011aaac403431726321f7934c3a9f4981f1bf7cfe83e,PodSandboxId:be846878eebfd19267f592532e82e2397dbf525327fdc3fd2493752faebe326c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765405711006992699,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-rr58f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 72d41b3e-1e1f-4171-8195-d86b2e7c3285,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9a644cccdec8282db66a04fddd52c44529cdef25a775c060ec9d1e26a8b9b3a4,PodSandboxId:18528e7312776991dff57d727f058775a2a4400f1107cd4b5b3c65fa1bee8fa9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765405696520924050,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-w8rwc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55912f2f-f973-48e6-871a-fd13b63514c4,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b29d29e7948342fff815c5cb12c3e41bb2776ee2f18e6749d5cda7a619de514,PodSandboxId:a066e40961becacdfb96f5f072ea4d9506f7c29aafd4e32985118c43ebe1cde9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765405694916501907,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-w5dlb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aced33d0-e4e9-4718-9577-433f9aeb7d97,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1370f0b5c5b78b1ee6bd16a460504d25b8c4f5e057577657691cf3ea6fc2309,PodSandboxId:ca5ee6e3983d7a59d851db656bb89679bdd47bfcf189b96af6835249c511890e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765405668005885072,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd516b6-c87a-40e2-a707-75ee9f2dfe60,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698a0a7d5083be5b6f12f498ec6941bfe4d800bcc73ff3529861720066cab23f,PodSandboxId:a10e70dd465a1c932ac585a10868626971dfbe854b007cebd96be9277e8922b9,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765405642250485067,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-t84vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49aaeb54-4c35-4927-8903-28c074178738,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed7661ceb3c5bda624d44999ef4385d224a490e97b78acc1a182006bd21c959b,PodSandboxId:782f9615fd7c0098c379fc3cbf273620ad95ae1066f773944104666dcedc8cfb,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765405640681515957,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34acfc61-a61c-4021-9f68-bfd552138291,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7188c2ead6e38325ac95ba892289d01348cf9c9c155fa6a7e65a29bc07232a88,PodSandboxId:7de6035cd917ec2d92eb275918a5cb16052b639c9ac8857abf294f378014aad1,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765405632910159710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4w6v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e6ede4-ca2c-4eb9-a3d1-a4209459a010,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b613868b0c2037100d4240def2a469aa4753caac82d15afc692242da9ed19a,PodSandboxId:cfe3f2b53d1c6e61538b4d56304bbc9f079b452faebef54026cfe3c209329ebd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765405631948407397,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4fsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7573193d-6d1a-4234-a12c-343613e99d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:865cd4374151004f1fd53eb21d2a5fc6ed8397dd0a3f446acf66d7d8321e5e0d,PodSandboxId:b43cba2a2d554214afaba89e44951ce25b318335a4a397376583c2813a80d78a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765405620448063306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47c0342be96a92673c2f5b0fb1b1cff2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e44079bd7e7a9d94b59112a7952c6c96c3ce5d9d069e8bc423adb650780aa03,PodSandboxId:48ebcb08bc37e596753e845029a35dc214e17777a19f8952bb713d2cd5415744,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765405620450539416,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbfb5a290986c3ba5f3632e753de9b5e,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae57e603fedc97c01344e449201c4da3c80dcedc1fa80b5dd388d358edf71cf,PodSandboxId:cbbc34c759b76b9dcaeb037f35e80a69f71faaeac9311910ae44734901e6d7b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765405620421874969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e96f1f7b4c35a0a45ef34
c4272223c41,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802193904fcda7303eff4ed00463c2bcbb98e5b541274554f07799d55e0a38f0,PodSandboxId:c5876d12b7d7fd1371142b46cccca44e976df7c74bd2ea6b47bbcb78f0199842,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765405620403263065,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe8e1ab4407bcf4ce945d6cc19196b5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a488a87-91f0-402f-9855-8b1f3d00c13a name=/runtime.v1.RuntimeService/ListContainers
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.778688513Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac178b67-1ded-45d8-b113-5e1b49e946c2 name=/runtime.v1.RuntimeService/Version
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.778866528Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac178b67-1ded-45d8-b113-5e1b49e946c2 name=/runtime.v1.RuntimeService/Version
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.780034972Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83cab9c8-e543-45de-b027-3aadb774d5b0 name=/runtime.v1.ImageService/ImageFsInfo
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.781468557Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765405905781438966,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83cab9c8-e543-45de-b027-3aadb774d5b0 name=/runtime.v1.ImageService/ImageFsInfo
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.782479173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4197aa2-e599-441d-a608-294e1705ce3c name=/runtime.v1.RuntimeService/ListContainers
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.782541454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4197aa2-e599-441d-a608-294e1705ce3c name=/runtime.v1.RuntimeService/ListContainers
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.782930164Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b558daaacc029aa25cc796bed774b72efd790624e5a5eb1383d2b97c309562ea,PodSandboxId:2e03a0ab62889cdacc0a187105e5216e055dc246f043a2074ee25b869a6087bc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765405762742653064,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38540fef-532f-483f-9d53-b8ff5b9bcf5b,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:834fb0ab7b6faea49641985bcb2768772e0944979420ad46d3ca1e1849e35ec3,PodSandboxId:b20c392c9cca82b15edd7d626b6e7202f043a838a337cbad3dfec804e1de6794,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765405722244575883,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fad4956f-5563-487f-ab71-bb145da43547,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf0c21ab7b3ceda9377011aaac403431726321f7934c3a9f4981f1bf7cfe83e,PodSandboxId:be846878eebfd19267f592532e82e2397dbf525327fdc3fd2493752faebe326c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765405711006992699,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-rr58f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 72d41b3e-1e1f-4171-8195-d86b2e7c3285,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9a644cccdec8282db66a04fddd52c44529cdef25a775c060ec9d1e26a8b9b3a4,PodSandboxId:18528e7312776991dff57d727f058775a2a4400f1107cd4b5b3c65fa1bee8fa9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765405696520924050,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-w8rwc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55912f2f-f973-48e6-871a-fd13b63514c4,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b29d29e7948342fff815c5cb12c3e41bb2776ee2f18e6749d5cda7a619de514,PodSandboxId:a066e40961becacdfb96f5f072ea4d9506f7c29aafd4e32985118c43ebe1cde9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765405694916501907,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-w5dlb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aced33d0-e4e9-4718-9577-433f9aeb7d97,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1370f0b5c5b78b1ee6bd16a460504d25b8c4f5e057577657691cf3ea6fc2309,PodSandboxId:ca5ee6e3983d7a59d851db656bb89679bdd47bfcf189b96af6835249c511890e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765405668005885072,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd516b6-c87a-40e2-a707-75ee9f2dfe60,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698a0a7d5083be5b6f12f498ec6941bfe4d800bcc73ff3529861720066cab23f,PodSandboxId:a10e70dd465a1c932ac585a10868626971dfbe854b007cebd96be9277e8922b9,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765405642250485067,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-t84vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49aaeb54-4c35-4927-8903-28c074178738,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed7661ceb3c5bda624d44999ef4385d224a490e97b78acc1a182006bd21c959b,PodSandboxId:782f9615fd7c0098c379fc3cbf273620ad95ae1066f773944104666dcedc8cfb,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765405640681515957,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34acfc61-a61c-4021-9f68-bfd552138291,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7188c2ead6e38325ac95ba892289d01348cf9c9c155fa6a7e65a29bc07232a88,PodSandboxId:7de6035cd917ec2d92eb275918a5cb16052b639c9ac8857abf294f378014aad1,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765405632910159710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4w6v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e6ede4-ca2c-4eb9-a3d1-a4209459a010,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b613868b0c2037100d4240def2a469aa4753caac82d15afc692242da9ed19a,PodSandboxId:cfe3f2b53d1c6e61538b4d56304bbc9f079b452faebef54026cfe3c209329ebd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765405631948407397,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4fsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7573193d-6d1a-4234-a12c-343613e99d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:865cd4374151004f1fd53eb21d2a5fc6ed8397dd0a3f446acf66d7d8321e5e0d,PodSandboxId:b43cba2a2d554214afaba89e44951ce25b318335a4a397376583c2813a80d78a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765405620448063306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47c0342be96a92673c2f5b0fb1b1cff2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e44079bd7e7a9d94b59112a7952c6c96c3ce5d9d069e8bc423adb650780aa03,PodSandboxId:48ebcb08bc37e596753e845029a35dc214e17777a19f8952bb713d2cd5415744,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765405620450539416,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbfb5a290986c3ba5f3632e753de9b5e,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae57e603fedc97c01344e449201c4da3c80dcedc1fa80b5dd388d358edf71cf,PodSandboxId:cbbc34c759b76b9dcaeb037f35e80a69f71faaeac9311910ae44734901e6d7b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765405620421874969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e96f1f7b4c35a0a45ef34
c4272223c41,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802193904fcda7303eff4ed00463c2bcbb98e5b541274554f07799d55e0a38f0,PodSandboxId:c5876d12b7d7fd1371142b46cccca44e976df7c74bd2ea6b47bbcb78f0199842,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765405620403263065,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe8e1ab4407bcf4ce945d6cc19196b5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4197aa2-e599-441d-a608-294e1705ce3c name=/runtime.v1.RuntimeService/ListContainers
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.816550452Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=62089880-f95e-40a0-b1f6-a2e708970eae name=/runtime.v1.RuntimeService/Version
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.816628196Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=62089880-f95e-40a0-b1f6-a2e708970eae name=/runtime.v1.RuntimeService/Version
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.818313533Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f428fae7-c8a7-4b3a-8969-d964d2aa6e59 name=/runtime.v1.ImageService/ImageFsInfo
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.819565678Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765405905819536493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f428fae7-c8a7-4b3a-8969-d964d2aa6e59 name=/runtime.v1.ImageService/ImageFsInfo
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.820628008Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0173e7f0-4c77-4949-815b-5d20a8f0e029 name=/runtime.v1.RuntimeService/ListContainers
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.820695944Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0173e7f0-4c77-4949-815b-5d20a8f0e029 name=/runtime.v1.RuntimeService/ListContainers
Dec 10 22:31:45 addons-462156 crio[819]: time="2025-12-10 22:31:45.821075273Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b558daaacc029aa25cc796bed774b72efd790624e5a5eb1383d2b97c309562ea,PodSandboxId:2e03a0ab62889cdacc0a187105e5216e055dc246f043a2074ee25b869a6087bc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765405762742653064,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38540fef-532f-483f-9d53-b8ff5b9bcf5b,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:834fb0ab7b6faea49641985bcb2768772e0944979420ad46d3ca1e1849e35ec3,PodSandboxId:b20c392c9cca82b15edd7d626b6e7202f043a838a337cbad3dfec804e1de6794,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765405722244575883,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fad4956f-5563-487f-ab71-bb145da43547,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf0c21ab7b3ceda9377011aaac403431726321f7934c3a9f4981f1bf7cfe83e,PodSandboxId:be846878eebfd19267f592532e82e2397dbf525327fdc3fd2493752faebe326c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765405711006992699,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-rr58f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 72d41b3e-1e1f-4171-8195-d86b2e7c3285,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9a644cccdec8282db66a04fddd52c44529cdef25a775c060ec9d1e26a8b9b3a4,PodSandboxId:18528e7312776991dff57d727f058775a2a4400f1107cd4b5b3c65fa1bee8fa9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765405696520924050,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-w8rwc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55912f2f-f973-48e6-871a-fd13b63514c4,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b29d29e7948342fff815c5cb12c3e41bb2776ee2f18e6749d5cda7a619de514,PodSandboxId:a066e40961becacdfb96f5f072ea4d9506f7c29aafd4e32985118c43ebe1cde9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765405694916501907,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-w5dlb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aced33d0-e4e9-4718-9577-433f9aeb7d97,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1370f0b5c5b78b1ee6bd16a460504d25b8c4f5e057577657691cf3ea6fc2309,PodSandboxId:ca5ee6e3983d7a59d851db656bb89679bdd47bfcf189b96af6835249c511890e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765405668005885072,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd516b6-c87a-40e2-a707-75ee9f2dfe60,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:698a0a7d5083be5b6f12f498ec6941bfe4d800bcc73ff3529861720066cab23f,PodSandboxId:a10e70dd465a1c932ac585a10868626971dfbe854b007cebd96be9277e8922b9,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765405642250485067,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-t84vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49aaeb54-4c35-4927-8903-28c074178738,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed7661ceb3c5bda624d44999ef4385d224a490e97b78acc1a182006bd21c959b,PodSandboxId:782f9615fd7c0098c379fc3cbf273620ad95ae1066f773944104666dcedc8cfb,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765405640681515957,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34acfc61-a61c-4021-9f68-bfd552138291,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7188c2ead6e38325ac95ba892289d01348cf9c9c155fa6a7e65a29bc07232a88,PodSandboxId:7de6035cd917ec2d92eb275918a5cb16052b639c9ac8857abf294f378014aad1,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765405632910159710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4w6v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e6ede4-ca2c-4eb9-a3d1-a4209459a010,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b613868b0c2037100d4240def2a469aa4753caac82d15afc692242da9ed19a,PodSandboxId:cfe3f2b53d1c6e61538b4d56304bbc9f079b452faebef54026cfe3c209329ebd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765405631948407397,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4fsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7573193d-6d1a-4234-a12c-343613e99d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:865cd4374151004f1fd53eb21d2a5fc6ed8397dd0a3f446acf66d7d8321e5e0d,PodSandboxId:b43cba2a2d554214afaba89e44951ce25b318335a4a397376583c2813a80d78a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765405620448063306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47c0342be96a92673c2f5b0fb1b1cff2,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e44079bd7e7a9d94b59112a7952c6c96c3ce5d9d069e8bc423adb650780aa03,PodSandboxId:48ebcb08bc37e596753e845029a35dc214e17777a19f8952bb713d2cd5415744,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765405620450539416,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbfb5a290986c3ba5f3632e753de9b5e,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ae57e603fedc97c01344e449201c4da3c80dcedc1fa80b5dd388d358edf71cf,PodSandboxId:cbbc34c759b76b9dcaeb037f35e80a69f71faaeac9311910ae44734901e6d7b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765405620421874969,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e96f1f7b4c35a0a45ef34
c4272223c41,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802193904fcda7303eff4ed00463c2bcbb98e5b541274554f07799d55e0a38f0,PodSandboxId:c5876d12b7d7fd1371142b46cccca44e976df7c74bd2ea6b47bbcb78f0199842,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765405620403263065,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-462156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe8e1ab4407bcf4ce945d6cc19196b5,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0173e7f0-4c77-4949-815b-5d20a8f0e029 name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
b558daaacc029 public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff 2 minutes ago Running nginx 0 2e03a0ab62889 nginx default
834fb0ab7b6fa gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 3 minutes ago Running busybox 0 b20c392c9cca8 busybox default
ccf0c21ab7b3c registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad 3 minutes ago Running controller 0 be846878eebfd ingress-nginx-controller-85d4c799dd-rr58f ingress-nginx
9a644cccdec82 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285 3 minutes ago Exited patch 0 18528e7312776 ingress-nginx-admission-patch-w8rwc ingress-nginx
6b29d29e79483 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285 3 minutes ago Exited create 0 a066e40961bec ingress-nginx-admission-create-w5dlb ingress-nginx
c1370f0b5c5b7 docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 3 minutes ago Running minikube-ingress-dns 0 ca5ee6e3983d7 kube-ingress-dns-minikube kube-system
698a0a7d5083b docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 a10e70dd465a1 amd-gpu-device-plugin-t84vv kube-system
ed7661ceb3c5b 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 782f9615fd7c0 storage-provisioner kube-system
7188c2ead6e38 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 7de6035cd917e coredns-66bc5c9577-4w6v4 kube-system
87b613868b0c2 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45 4 minutes ago Running kube-proxy 0 cfe3f2b53d1c6 kube-proxy-p4fsb kube-system
5e44079bd7e7a 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952 4 minutes ago Running kube-scheduler 0 48ebcb08bc37e kube-scheduler-addons-462156 kube-system
865cd43741510 a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1 4 minutes ago Running etcd 0 b43cba2a2d554 etcd-addons-462156 kube-system
3ae57e603fedc 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8 4 minutes ago Running kube-controller-manager 0 cbbc34c759b76 kube-controller-manager-addons-462156 kube-system
802193904fcda a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85 4 minutes ago Running kube-apiserver 0 c5876d12b7d7f kube-apiserver-addons-462156 kube-system
==> coredns [7188c2ead6e38325ac95ba892289d01348cf9c9c155fa6a7e65a29bc07232a88] <==
linux/amd64, go1.24.1, 707c7c1
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
[INFO] Reloading complete
[INFO] 127.0.0.1:51623 - 5125 "HINFO IN 4691920241162746704.8558605234871798027. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026712188s
[INFO] 10.244.0.23:44049 - 60332 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000211225s
[INFO] 10.244.0.23:35855 - 59323 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0008102s
[INFO] 10.244.0.23:47734 - 10675 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000186952s
[INFO] 10.244.0.23:49750 - 26562 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000094608s
[INFO] 10.244.0.23:54807 - 759 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00007952s
[INFO] 10.244.0.23:53625 - 36565 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000072255s
[INFO] 10.244.0.23:36915 - 29260 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001812467s
[INFO] 10.244.0.23:42340 - 21935 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.001366896s
[INFO] 10.244.0.27:44289 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001642496s
[INFO] 10.244.0.27:46516 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00014987s
==> describe nodes <==
Name: addons-462156
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-462156
kubernetes.io/os=linux
minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
minikube.k8s.io/name=addons-462156
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_10T22_27_07_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-462156
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 10 Dec 2025 22:27:03 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-462156
AcquireTime: <unset>
RenewTime: Wed, 10 Dec 2025 22:31:42 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 10 Dec 2025 22:29:39 +0000 Wed, 10 Dec 2025 22:27:01 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 10 Dec 2025 22:29:39 +0000 Wed, 10 Dec 2025 22:27:01 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 10 Dec 2025 22:29:39 +0000 Wed, 10 Dec 2025 22:27:01 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 10 Dec 2025 22:29:39 +0000 Wed, 10 Dec 2025 22:27:07 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.89
Hostname: addons-462156
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
System Info:
Machine ID: 04673162af0d46ce874ca95dda098d35
System UUID: 04673162-af0d-46ce-874c-a95dda098d35
Boot ID: 7f940656-edd7-4642-87ac-d557629f3ef4
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.2
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m8s
default hello-world-app-5d498dc89-p9mq7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m29s
ingress-nginx ingress-nginx-controller-85d4c799dd-rr58f 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m27s
kube-system amd-gpu-device-plugin-t84vv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m32s
kube-system coredns-66bc5c9577-4w6v4 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m35s
kube-system etcd-addons-462156 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m42s
kube-system kube-apiserver-addons-462156 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m40s
kube-system kube-controller-manager-addons-462156 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m40s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m29s
kube-system kube-proxy-p4fsb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m35s
kube-system kube-scheduler-addons-462156 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m40s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m29s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m33s kube-proxy
Normal Starting 4m47s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m47s (x8 over 4m47s) kubelet Node addons-462156 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m47s (x8 over 4m47s) kubelet Node addons-462156 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m47s (x7 over 4m47s) kubelet Node addons-462156 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m47s kubelet Updated Node Allocatable limit across pods
Normal Starting 4m40s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m40s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m40s kubelet Node addons-462156 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m40s kubelet Node addons-462156 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m40s kubelet Node addons-462156 status is now: NodeHasSufficientPID
Normal NodeReady 4m39s kubelet Node addons-462156 status is now: NodeReady
Normal RegisteredNode 4m36s node-controller Node addons-462156 event: Registered Node addons-462156 in Controller
==> dmesg <==
[ +0.000014] kauditd_printk_skb: 276 callbacks suppressed
[ +3.561449] kauditd_printk_skb: 407 callbacks suppressed
[ +5.796427] kauditd_printk_skb: 5 callbacks suppressed
[ +12.966961] kauditd_printk_skb: 32 callbacks suppressed
[ +6.873850] kauditd_printk_skb: 26 callbacks suppressed
[Dec10 22:28] kauditd_printk_skb: 5 callbacks suppressed
[ +6.136584] kauditd_printk_skb: 53 callbacks suppressed
[ +5.096414] kauditd_printk_skb: 20 callbacks suppressed
[ +1.135139] kauditd_printk_skb: 200 callbacks suppressed
[ +3.000501] kauditd_printk_skb: 113 callbacks suppressed
[ +0.000181] kauditd_printk_skb: 59 callbacks suppressed
[ +5.506434] kauditd_printk_skb: 53 callbacks suppressed
[ +3.328725] kauditd_printk_skb: 47 callbacks suppressed
[ +10.734521] kauditd_printk_skb: 17 callbacks suppressed
[ +5.919858] kauditd_printk_skb: 22 callbacks suppressed
[Dec10 22:29] kauditd_printk_skb: 38 callbacks suppressed
[ +0.000193] kauditd_printk_skb: 108 callbacks suppressed
[ +0.559663] kauditd_printk_skb: 167 callbacks suppressed
[ +1.027041] kauditd_printk_skb: 181 callbacks suppressed
[ +6.213864] kauditd_printk_skb: 101 callbacks suppressed
[ +5.000933] kauditd_printk_skb: 16 callbacks suppressed
[ +0.000556] kauditd_printk_skb: 16 callbacks suppressed
[ +0.863509] kauditd_printk_skb: 59 callbacks suppressed
[ +0.703385] kauditd_printk_skb: 48 callbacks suppressed
[Dec10 22:31] kauditd_printk_skb: 71 callbacks suppressed
==> etcd [865cd4374151004f1fd53eb21d2a5fc6ed8397dd0a3f446acf66d7d8321e5e0d] <==
{"level":"info","ts":"2025-12-10T22:28:14.775406Z","caller":"traceutil/trace.go:172","msg":"trace[764113813] linearizableReadLoop","detail":"{readStateIndex:1098; appliedIndex:1098; }","duration":"118.256612ms","start":"2025-12-10T22:28:14.657134Z","end":"2025-12-10T22:28:14.775390Z","steps":["trace[764113813] 'read index received' (duration: 118.250897ms)","trace[764113813] 'applied index is now lower than readState.Index' (duration: 5.03µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-10T22:28:14.775532Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.398353ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-10T22:28:14.775549Z","caller":"traceutil/trace.go:172","msg":"trace[1391575428] range","detail":"{range_begin:/registry/roles; range_end:; response_count:0; response_revision:1071; }","duration":"118.433453ms","start":"2025-12-10T22:28:14.657111Z","end":"2025-12-10T22:28:14.775544Z","steps":["trace[1391575428] 'agreement among raft nodes before linearized reading' (duration: 118.365741ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-10T22:28:14.776173Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.717768ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/servicecidrs\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-10T22:28:14.776353Z","caller":"traceutil/trace.go:172","msg":"trace[1073129533] range","detail":"{range_begin:/registry/servicecidrs; range_end:; response_count:0; response_revision:1072; }","duration":"116.968163ms","start":"2025-12-10T22:28:14.659374Z","end":"2025-12-10T22:28:14.776342Z","steps":["trace[1073129533] 'agreement among raft nodes before linearized reading' (duration: 116.553652ms)"],"step_count":1}
{"level":"info","ts":"2025-12-10T22:28:14.776440Z","caller":"traceutil/trace.go:172","msg":"trace[1707107323] transaction","detail":"{read_only:false; response_revision:1072; number_of_response:1; }","duration":"192.964005ms","start":"2025-12-10T22:28:14.583465Z","end":"2025-12-10T22:28:14.776429Z","steps":["trace[1707107323] 'process raft request' (duration: 192.254518ms)"],"step_count":1}
{"level":"info","ts":"2025-12-10T22:28:19.711794Z","caller":"traceutil/trace.go:172","msg":"trace[1925148551] transaction","detail":"{read_only:false; response_revision:1123; number_of_response:1; }","duration":"103.235058ms","start":"2025-12-10T22:28:19.608495Z","end":"2025-12-10T22:28:19.711730Z","steps":["trace[1925148551] 'process raft request' (duration: 100.63493ms)"],"step_count":1}
{"level":"info","ts":"2025-12-10T22:28:29.352491Z","caller":"traceutil/trace.go:172","msg":"trace[1288898758] transaction","detail":"{read_only:false; response_revision:1171; number_of_response:1; }","duration":"156.998921ms","start":"2025-12-10T22:28:29.195479Z","end":"2025-12-10T22:28:29.352477Z","steps":["trace[1288898758] 'process raft request' (duration: 156.913639ms)"],"step_count":1}
{"level":"info","ts":"2025-12-10T22:28:30.537168Z","caller":"traceutil/trace.go:172","msg":"trace[1990854520] linearizableReadLoop","detail":"{readStateIndex:1202; appliedIndex:1202; }","duration":"100.222766ms","start":"2025-12-10T22:28:30.436930Z","end":"2025-12-10T22:28:30.537153Z","steps":["trace[1990854520] 'read index received' (duration: 100.217713ms)","trace[1990854520] 'applied index is now lower than readState.Index' (duration: 4.475µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-10T22:28:30.537314Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.348556ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-10T22:28:30.537335Z","caller":"traceutil/trace.go:172","msg":"trace[1130245531] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1172; }","duration":"100.405173ms","start":"2025-12-10T22:28:30.436924Z","end":"2025-12-10T22:28:30.537329Z","steps":["trace[1130245531] 'agreement among raft nodes before linearized reading' (duration: 100.324017ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-10T22:28:30.729397Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"191.811976ms","expected-duration":"100ms","prefix":"","request":"header:<ID:416471959417307128 > lease_revoke:<id:05c79b0a5ffabeb3>","response":"size:28"}
{"level":"info","ts":"2025-12-10T22:28:30.730840Z","caller":"traceutil/trace.go:172","msg":"trace[597849462] linearizableReadLoop","detail":"{readStateIndex:1203; appliedIndex:1202; }","duration":"121.91333ms","start":"2025-12-10T22:28:30.608913Z","end":"2025-12-10T22:28:30.730826Z","steps":["trace[597849462] 'read index received' (duration: 26.151µs)","trace[597849462] 'applied index is now lower than readState.Index' (duration: 121.885785ms)"],"step_count":2}
{"level":"warn","ts":"2025-12-10T22:28:30.730948Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.023959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" limit:1 ","response":"range_response_count:1 size:554"}
{"level":"info","ts":"2025-12-10T22:28:30.730966Z","caller":"traceutil/trace.go:172","msg":"trace[1412954282] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1172; }","duration":"122.051324ms","start":"2025-12-10T22:28:30.608909Z","end":"2025-12-10T22:28:30.730961Z","steps":["trace[1412954282] 'agreement among raft nodes before linearized reading' (duration: 121.958021ms)"],"step_count":1}
{"level":"info","ts":"2025-12-10T22:29:04.550990Z","caller":"traceutil/trace.go:172","msg":"trace[45574879] linearizableReadLoop","detail":"{readStateIndex:1398; appliedIndex:1398; }","duration":"193.90097ms","start":"2025-12-10T22:29:04.357071Z","end":"2025-12-10T22:29:04.550972Z","steps":["trace[45574879] 'read index received' (duration: 193.876073ms)","trace[45574879] 'applied index is now lower than readState.Index' (duration: 24.221µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-10T22:29:04.551187Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"194.070927ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-10T22:29:04.551229Z","caller":"traceutil/trace.go:172","msg":"trace[1529683260] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1360; }","duration":"194.159357ms","start":"2025-12-10T22:29:04.357062Z","end":"2025-12-10T22:29:04.551221Z","steps":["trace[1529683260] 'agreement among raft nodes before linearized reading' (duration: 194.043971ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-10T22:29:04.551574Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"187.531367ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-10T22:29:04.551625Z","caller":"traceutil/trace.go:172","msg":"trace[1081372739] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1360; }","duration":"187.586994ms","start":"2025-12-10T22:29:04.364031Z","end":"2025-12-10T22:29:04.551618Z","steps":["trace[1081372739] 'agreement among raft nodes before linearized reading' (duration: 187.517888ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-10T22:29:04.551987Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.373645ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-10T22:29:04.552027Z","caller":"traceutil/trace.go:172","msg":"trace[1205609276] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1360; }","duration":"114.416402ms","start":"2025-12-10T22:29:04.437604Z","end":"2025-12-10T22:29:04.552021Z","steps":["trace[1205609276] 'agreement among raft nodes before linearized reading' (duration: 114.354435ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-10T22:29:04.554575Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.188545ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-10T22:29:04.554619Z","caller":"traceutil/trace.go:172","msg":"trace[2123995129] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1360; }","duration":"190.235872ms","start":"2025-12-10T22:29:04.364376Z","end":"2025-12-10T22:29:04.554612Z","steps":["trace[2123995129] 'agreement among raft nodes before linearized reading' (duration: 190.164834ms)"],"step_count":1}
{"level":"info","ts":"2025-12-10T22:29:27.999643Z","caller":"traceutil/trace.go:172","msg":"trace[2094646558] transaction","detail":"{read_only:false; response_revision:1633; number_of_response:1; }","duration":"121.251986ms","start":"2025-12-10T22:29:27.878378Z","end":"2025-12-10T22:29:27.999630Z","steps":["trace[2094646558] 'process raft request' (duration: 121.163244ms)"],"step_count":1}
==> kernel <==
22:31:46 up 5 min, 0 users, load average: 0.66, 1.53, 0.80
Linux addons-462156 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec 8 03:04:10 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [802193904fcda7303eff4ed00463c2bcbb98e5b541274554f07799d55e0a38f0] <==
E1210 22:28:05.132620 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.162.166:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.162.166:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.162.166:443: connect: connection refused" logger="UnhandledError"
E1210 22:28:05.153888 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.162.166:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.162.166:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.162.166:443: connect: connection refused" logger="UnhandledError"
I1210 22:28:05.261517 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1210 22:28:49.784508 1 conn.go:339] Error on socket receive: read tcp 192.168.39.89:8443->192.168.39.1:50570: use of closed network connection
E1210 22:28:49.972528 1 conn.go:339] Error on socket receive: read tcp 192.168.39.89:8443->192.168.39.1:50594: use of closed network connection
I1210 22:28:59.132895 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.153.104"}
I1210 22:29:06.162169 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1210 22:29:17.761093 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1210 22:29:17.997415 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.194.0"}
I1210 22:29:30.064040 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
E1210 22:29:36.530795 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I1210 22:29:56.208593 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1210 22:29:56.208723 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1210 22:29:56.243987 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1210 22:29:56.244129 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1210 22:29:56.249426 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1210 22:29:56.249472 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1210 22:29:56.267652 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1210 22:29:56.267682 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1210 22:29:56.286011 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1210 22:29:56.286080 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1210 22:29:57.250239 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1210 22:29:57.286691 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1210 22:29:57.337317 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I1210 22:31:44.734841 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.94.71"}
==> kube-controller-manager [3ae57e603fedc97c01344e449201c4da3c80dcedc1fa80b5dd388d358edf71cf] <==
E1210 22:30:05.873463 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
I1210 22:30:10.438956 1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
I1210 22:30:10.439061 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1210 22:30:10.496604 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1210 22:30:10.496674 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1210 22:30:12.338591 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1210 22:30:12.339975 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1210 22:30:14.080188 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1210 22:30:14.081495 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1210 22:30:14.988961 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1210 22:30:14.989956 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1210 22:30:30.433682 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1210 22:30:30.435010 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1210 22:30:31.484933 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1210 22:30:31.486040 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1210 22:30:37.811499 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1210 22:30:37.812670 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1210 22:31:06.641958 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1210 22:31:06.643236 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1210 22:31:11.128949 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1210 22:31:11.130871 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1210 22:31:26.952280 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1210 22:31:26.953340 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1210 22:31:40.540134 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1210 22:31:40.541175 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [87b613868b0c2037100d4240def2a469aa4753caac82d15afc692242da9ed19a] <==
I1210 22:27:12.481921 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1210 22:27:12.584872 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1210 22:27:12.584911 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.89"]
E1210 22:27:12.585017 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1210 22:27:12.808504 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1210 22:27:12.808556 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1210 22:27:12.808581 1 server_linux.go:132] "Using iptables Proxier"
I1210 22:27:12.827642 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1210 22:27:12.829050 1 server.go:527] "Version info" version="v1.34.2"
I1210 22:27:12.829078 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1210 22:27:12.855032 1 config.go:200] "Starting service config controller"
I1210 22:27:12.855045 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1210 22:27:12.855087 1 config.go:106] "Starting endpoint slice config controller"
I1210 22:27:12.855093 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1210 22:27:12.855120 1 config.go:403] "Starting serviceCIDR config controller"
I1210 22:27:12.855123 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1210 22:27:12.859595 1 config.go:309] "Starting node config controller"
I1210 22:27:12.859610 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1210 22:27:12.859617 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1210 22:27:12.956742 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1210 22:27:12.956807 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1210 22:27:12.956861 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [5e44079bd7e7a9d94b59112a7952c6c96c3ce5d9d069e8bc423adb650780aa03] <==
E1210 22:27:03.308201 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1210 22:27:03.308246 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1210 22:27:03.308293 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1210 22:27:03.308348 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1210 22:27:03.308395 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1210 22:27:03.308429 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1210 22:27:03.308475 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1210 22:27:03.308539 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1210 22:27:03.309871 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1210 22:27:04.113393 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1210 22:27:04.162218 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1210 22:27:04.213871 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1210 22:27:04.238652 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1210 22:27:04.305859 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1210 22:27:04.311330 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1210 22:27:04.339576 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1210 22:27:04.376973 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1210 22:27:04.418431 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1210 22:27:04.419910 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1210 22:27:04.469696 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1210 22:27:04.509964 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1210 22:27:04.602644 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1210 22:27:04.646729 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1210 22:27:04.715434 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
I1210 22:27:06.794132 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Dec 10 22:30:06 addons-462156 kubelet[1515]: E1210 22:30:06.502391 1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405806501943801 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:30:06 addons-462156 kubelet[1515]: E1210 22:30:06.502436 1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405806501943801 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:30:07 addons-462156 kubelet[1515]: I1210 22:30:07.455651 1515 scope.go:117] "RemoveContainer" containerID="0b5cbca6e062454211820a5be3050e40be3e2a32b3fb778286b28674f90e1a45"
Dec 10 22:30:07 addons-462156 kubelet[1515]: I1210 22:30:07.569267 1515 scope.go:117] "RemoveContainer" containerID="c13155ba5d4acd98556ddf7a366f059622b46ad214e9c58836ffb9c756df6c34"
Dec 10 22:30:16 addons-462156 kubelet[1515]: E1210 22:30:16.505118 1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405816504748727 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:30:16 addons-462156 kubelet[1515]: E1210 22:30:16.505159 1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405816504748727 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:30:26 addons-462156 kubelet[1515]: E1210 22:30:26.507554 1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405826506987386 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:30:26 addons-462156 kubelet[1515]: E1210 22:30:26.507584 1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405826506987386 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:30:36 addons-462156 kubelet[1515]: E1210 22:30:36.511408 1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405836511009400 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:30:36 addons-462156 kubelet[1515]: E1210 22:30:36.511437 1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405836511009400 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:30:46 addons-462156 kubelet[1515]: E1210 22:30:46.514308 1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405846513803711 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:30:46 addons-462156 kubelet[1515]: E1210 22:30:46.514356 1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405846513803711 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:30:56 addons-462156 kubelet[1515]: E1210 22:30:56.516447 1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405856515995824 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:30:56 addons-462156 kubelet[1515]: E1210 22:30:56.516820 1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405856515995824 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:31:06 addons-462156 kubelet[1515]: E1210 22:31:06.519657 1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405866519273449 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:31:06 addons-462156 kubelet[1515]: E1210 22:31:06.519690 1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405866519273449 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:31:16 addons-462156 kubelet[1515]: I1210 22:31:16.297097 1515 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Dec 10 22:31:16 addons-462156 kubelet[1515]: E1210 22:31:16.521960 1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405876521593259 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:31:16 addons-462156 kubelet[1515]: E1210 22:31:16.522117 1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405876521593259 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:31:26 addons-462156 kubelet[1515]: E1210 22:31:26.524941 1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405886524499644 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:31:26 addons-462156 kubelet[1515]: E1210 22:31:26.524963 1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405886524499644 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:31:34 addons-462156 kubelet[1515]: I1210 22:31:34.297339 1515 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-t84vv" secret="" err="secret \"gcp-auth\" not found"
Dec 10 22:31:36 addons-462156 kubelet[1515]: E1210 22:31:36.527396 1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765405896527091285 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:31:36 addons-462156 kubelet[1515]: E1210 22:31:36.527416 1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765405896527091285 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 10 22:31:44 addons-462156 kubelet[1515]: I1210 22:31:44.766183 1515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz2dc\" (UniqueName: \"kubernetes.io/projected/326288b8-fceb-4c4a-8017-d11281559671-kube-api-access-rz2dc\") pod \"hello-world-app-5d498dc89-p9mq7\" (UID: \"326288b8-fceb-4c4a-8017-d11281559671\") " pod="default/hello-world-app-5d498dc89-p9mq7"
==> storage-provisioner [ed7661ceb3c5bda624d44999ef4385d224a490e97b78acc1a182006bd21c959b] <==
W1210 22:31:20.600620 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:22.604318 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:22.609534 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:24.612880 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:24.620459 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:26.624649 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:26.629528 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:28.632795 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:28.637889 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:30.641953 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:30.647330 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:32.651439 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:32.656503 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:34.661308 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:34.666357 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:36.669922 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:36.683980 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:38.687520 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:38.693261 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:40.696884 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:40.705527 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:42.708694 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:42.713896 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:44.723211 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1210 22:31:44.752047 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
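The storage-provisioner warnings above stem from the core `v1 Endpoints` API being deprecated as of Kubernetes v1.33 in favor of `discovery.k8s.io/v1 EndpointSlice`. As a rough sketch of the migration target (all names and addresses below are hypothetical, not taken from this cluster), an equivalent EndpointSlice object looks like:

```yaml
# Illustrative EndpointSlice; names/addresses are hypothetical.
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-svc-abc12
  namespace: default
  labels:
    # Required: ties the slice back to its owning Service.
    kubernetes.io/service-name: example-svc
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 8080
endpoints:
  - addresses:
      - "10.244.0.5"
    conditions:
      ready: true
```

Unlike a single Endpoints object, a Service may own several EndpointSlices, linked by the `kubernetes.io/service-name` label.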
helpers_test.go:263: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-462156 -n addons-462156
helpers_test.go:270: (dbg) Run: kubectl --context addons-462156 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-p9mq7 ingress-nginx-admission-create-w5dlb ingress-nginx-admission-patch-w8rwc
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run: kubectl --context addons-462156 describe pod hello-world-app-5d498dc89-p9mq7 ingress-nginx-admission-create-w5dlb ingress-nginx-admission-patch-w8rwc
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-462156 describe pod hello-world-app-5d498dc89-p9mq7 ingress-nginx-admission-create-w5dlb ingress-nginx-admission-patch-w8rwc: exit status 1 (72.924746ms)
-- stdout --
Name: hello-world-app-5d498dc89-p9mq7
Namespace: default
Priority: 0
Service Account: default
Node: addons-462156/192.168.39.89
Start Time: Wed, 10 Dec 2025 22:31:44 +0000
Labels: app=hello-world-app
pod-template-hash=5d498dc89
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hello-world-app-5d498dc89
Containers:
hello-world-app:
Container ID:
Image: docker.io/kicbase/echo-server:1.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rz2dc (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-rz2dc:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2s default-scheduler Successfully assigned default/hello-world-app-5d498dc89-p9mq7 to addons-462156
Normal Pulling 1s kubelet Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-w5dlb" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-w8rwc" not found
** /stderr **
helpers_test.go:288: kubectl --context addons-462156 describe pod hello-world-app-5d498dc89-p9mq7 ingress-nginx-admission-create-w5dlb ingress-nginx-admission-patch-w8rwc: exit status 1
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-462156 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-462156 addons disable ingress-dns --alsologtostderr -v=1: (1.228029785s)
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-462156 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-462156 addons disable ingress --alsologtostderr -v=1: (7.751975027s)
--- FAIL: TestAddons/parallel/Ingress (158.39s)
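For context, the failing test applies `testdata/nginx-ingress-v1.yaml` and then curls the node with `Host: nginx.example.com`, expecting the ingress-nginx controller to route the request to the `nginx` Service. A minimal sketch of such an Ingress (this is NOT the actual contents of `testdata/nginx-ingress-v1.yaml`, just an illustrative manifest) would be:

```yaml
# Illustrative Ingress of the kind the test exercises; not the real testdata file.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: nginx.example.com   # the Host header the test's curl sends
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx     # Service the log reports as found in default
                port:
                  number: 80
```

The ssh curl exiting with status 28 (curl's timeout code) means the controller never answered on 127.0.0.1 within the deadline, which is why the test fails before the post-mortem dump above.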