=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run: kubectl --context addons-685870 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run: kubectl --context addons-685870 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run: kubectl --context addons-685870 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [9264c705-e985-4103-9edc-eaa92549670d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [9264c705-e985-4103-9edc-eaa92549670d] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.007820133s
I1213 13:08:50.293002 135234 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run: out/minikube-linux-amd64 -p addons-685870 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-685870 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.578499235s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
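
Note: exit status 28 is curl's CURLE_OPERATION_TIMEDOUT, which `minikube ssh` propagates from the guest, so the ingress controller never answered on 127.0.0.1:80 inside the VM. A minimal Go sketch of this kind of probe, re-running the exact command from the log until it succeeds (the 10s interval and 5m deadline are illustrative assumptions, not the test's actual timings):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollIngress repeats the curl probe from the log until the ingress
// controller responds or the deadline passes.
func pollIngress() error {
	deadline := time.Now().Add(5 * time.Minute) // assumed deadline
	for time.Now().Before(deadline) {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "addons-685870",
			"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("ingress responded:\n%s\n", out)
			return nil
		}
		time.Sleep(10 * time.Second) // assumed interval
	}
	return fmt.Errorf("ingress did not respond before deadline")
}

func main() {
	if err := pollIngress(); err != nil {
		fmt.Println(err)
	}
}
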
addons_test.go:290: (dbg) Run: kubectl --context addons-685870 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run: out/minikube-linux-amd64 -p addons-685870 ip
addons_test.go:301: (dbg) Run: nslookup hello-john.test 192.168.39.155
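
The nslookup above exercises the ingress-dns addon by querying the VM's IP (192.168.39.155) directly as a DNS server. An equivalent check in Go, pointing a custom resolver at the address from the log (the 5s timeout is an assumption):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Route all lookups to the minikube VM's DNS responder instead of the
	// system resolver, mirroring `nslookup hello-john.test 192.168.39.155`.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second} // assumed timeout
			return d.DialContext(ctx, "udp", "192.168.39.155:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "hello-john.test")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("hello-john.test resolves to:", addrs)
}
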
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-685870 -n addons-685870
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-amd64 -p addons-685870 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-685870 logs -n 25: (1.181791496s)
helpers_test.go:261: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-059438 │ download-only-059438 │ jenkins │ v1.37.0 │ 13 Dec 25 13:06 UTC │ 13 Dec 25 13:06 UTC │
│ start │ --download-only -p binary-mirror-716159 --alsologtostderr --binary-mirror http://127.0.0.1:33249 --driver=kvm2 --container-runtime=crio │ binary-mirror-716159 │ jenkins │ v1.37.0 │ 13 Dec 25 13:06 UTC │ │
│ delete │ -p binary-mirror-716159 │ binary-mirror-716159 │ jenkins │ v1.37.0 │ 13 Dec 25 13:06 UTC │ 13 Dec 25 13:06 UTC │
│ addons │ enable dashboard -p addons-685870 │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:06 UTC │ │
│ addons │ disable dashboard -p addons-685870 │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:06 UTC │ │
│ start │ -p addons-685870 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:06 UTC │ 13 Dec 25 13:08 UTC │
│ addons │ addons-685870 addons disable volcano --alsologtostderr -v=1 │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
│ addons │ addons-685870 addons disable gcp-auth --alsologtostderr -v=1 │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
│ addons │ enable headlamp -p addons-685870 --alsologtostderr -v=1 │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
│ addons │ addons-685870 addons disable metrics-server --alsologtostderr -v=1 │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
│ addons │ addons-685870 addons disable headlamp --alsologtostderr -v=1 │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
│ ip │ addons-685870 ip │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
│ addons │ addons-685870 addons disable registry --alsologtostderr -v=1 │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
│ ssh │ addons-685870 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-685870 │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
│ addons │ addons-685870 addons disable registry-creds --alsologtostderr -v=1 │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:08 UTC │
│ addons │ addons-685870 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:08 UTC │ 13 Dec 25 13:09 UTC │
│ ssh │ addons-685870 ssh cat /opt/local-path-provisioner/pvc-ebf86252-4882-4e05-b2c9-1d3fc597ad06_default_test-pvc/file1 │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:09 UTC │ 13 Dec 25 13:09 UTC │
│ addons │ addons-685870 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:09 UTC │ 13 Dec 25 13:09 UTC │
│ addons │ addons-685870 addons disable yakd --alsologtostderr -v=1 │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:09 UTC │ 13 Dec 25 13:09 UTC │
│ addons │ addons-685870 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:09 UTC │ 13 Dec 25 13:09 UTC │
│ addons │ addons-685870 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:09 UTC │ 13 Dec 25 13:09 UTC │
│ addons │ addons-685870 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:09 UTC │ 13 Dec 25 13:09 UTC │
│ addons │ addons-685870 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:09 UTC │ 13 Dec 25 13:09 UTC │
│ ip │ addons-685870 ip │ addons-685870 │ jenkins │ v1.37.0 │ 13 Dec 25 13:11 UTC │ 13 Dec 25 13:11 UTC │
└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/13 13:06:04
Running on machine: ubuntu-20-agent-10
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1213 13:06:04.923724 136192 out.go:360] Setting OutFile to fd 1 ...
I1213 13:06:04.923976 136192 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:06:04.923985 136192 out.go:374] Setting ErrFile to fd 2...
I1213 13:06:04.923990 136192 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 13:06:04.924244 136192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-131207/.minikube/bin
I1213 13:06:04.924830 136192 out.go:368] Setting JSON to false
I1213 13:06:04.925714 136192 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2905,"bootTime":1765628260,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1213 13:06:04.925769 136192 start.go:143] virtualization: kvm guest
I1213 13:06:04.927463 136192 out.go:179] * [addons-685870] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1213 13:06:04.928660 136192 out.go:179] - MINIKUBE_LOCATION=22122
I1213 13:06:04.928666 136192 notify.go:221] Checking for updates...
I1213 13:06:04.930717 136192 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1213 13:06:04.931857 136192 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22122-131207/kubeconfig
I1213 13:06:04.932918 136192 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-131207/.minikube
I1213 13:06:04.934040 136192 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1213 13:06:04.935173 136192 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1213 13:06:04.936517 136192 driver.go:422] Setting default libvirt URI to qemu:///system
I1213 13:06:04.965680 136192 out.go:179] * Using the kvm2 driver based on user configuration
I1213 13:06:04.966551 136192 start.go:309] selected driver: kvm2
I1213 13:06:04.966566 136192 start.go:927] validating driver "kvm2" against <nil>
I1213 13:06:04.966581 136192 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1213 13:06:04.967295 136192 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1213 13:06:04.967530 136192 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1213 13:06:04.967557 136192 cni.go:84] Creating CNI manager for ""
I1213 13:06:04.967612 136192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1213 13:06:04.967632 136192 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1213 13:06:04.967710 136192 start.go:353] cluster config:
{Name:addons-685870 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-685870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1213 13:06:04.967833 136192 iso.go:125] acquiring lock: {Name:mk3b22d147b17c1b05cdcd03e16c3f962e91cdaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1213 13:06:04.969055 136192 out.go:179] * Starting "addons-685870" primary control-plane node in "addons-685870" cluster
I1213 13:06:04.969972 136192 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1213 13:06:04.969998 136192 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
I1213 13:06:04.970005 136192 cache.go:65] Caching tarball of preloaded images
I1213 13:06:04.970097 136192 preload.go:238] Found /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1213 13:06:04.970112 136192 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
I1213 13:06:04.970406 136192 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/config.json ...
I1213 13:06:04.970429 136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/config.json: {Name:mk87d25a7add1b61736edadb979d71fef18f2d73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 13:06:04.970555 136192 start.go:360] acquireMachinesLock for addons-685870: {Name:mkd3517afd6ad3d581ae9f96a02a4688cf83ce0e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1213 13:06:04.971216 136192 start.go:364] duration metric: took 646.238µs to acquireMachinesLock for "addons-685870"
I1213 13:06:04.971240 136192 start.go:93] Provisioning new machine with config: &{Name:addons-685870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-685870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1213 13:06:04.971292 136192 start.go:125] createHost starting for "" (driver="kvm2")
I1213 13:06:04.973013 136192 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1213 13:06:04.973234 136192 start.go:159] libmachine.API.Create for "addons-685870" (driver="kvm2")
I1213 13:06:04.973264 136192 client.go:173] LocalClient.Create starting
I1213 13:06:04.973336 136192 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem
I1213 13:06:04.995700 136192 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem
I1213 13:06:05.068319 136192 main.go:143] libmachine: creating domain...
I1213 13:06:05.068341 136192 main.go:143] libmachine: creating network...
I1213 13:06:05.069873 136192 main.go:143] libmachine: found existing default network
I1213 13:06:05.070132 136192 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1213 13:06:05.070716 136192 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c109a0}
I1213 13:06:05.070810 136192 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-685870</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1213 13:06:05.077065 136192 main.go:143] libmachine: creating private network mk-addons-685870 192.168.39.0/24...
I1213 13:06:05.142482 136192 main.go:143] libmachine: private network mk-addons-685870 192.168.39.0/24 created
I1213 13:06:05.142796 136192 main.go:143] libmachine: <network>
<name>mk-addons-685870</name>
<uuid>bfbff2e1-dc1e-4727-b5f5-e11552e7878b</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:02:36:39'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1213 13:06:05.142833 136192 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870 ...
I1213 13:06:05.142853 136192 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22122-131207/.minikube/cache/iso/amd64/minikube-v1.37.0-1765613186-22122-amd64.iso
I1213 13:06:05.142864 136192 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22122-131207/.minikube
I1213 13:06:05.142935 136192 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22122-131207/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22122-131207/.minikube/cache/iso/amd64/minikube-v1.37.0-1765613186-22122-amd64.iso...
I1213 13:06:05.440530 136192 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa...
I1213 13:06:05.628432 136192 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/addons-685870.rawdisk...
I1213 13:06:05.628476 136192 main.go:143] libmachine: Writing magic tar header
I1213 13:06:05.628517 136192 main.go:143] libmachine: Writing SSH key tar header
I1213 13:06:05.628606 136192 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870 ...
I1213 13:06:05.628661 136192 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870
I1213 13:06:05.628703 136192 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870 (perms=drwx------)
I1213 13:06:05.628720 136192 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22122-131207/.minikube/machines
I1213 13:06:05.628732 136192 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22122-131207/.minikube/machines (perms=drwxr-xr-x)
I1213 13:06:05.628743 136192 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22122-131207/.minikube
I1213 13:06:05.628753 136192 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22122-131207/.minikube (perms=drwxr-xr-x)
I1213 13:06:05.628764 136192 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22122-131207
I1213 13:06:05.628773 136192 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22122-131207 (perms=drwxrwxr-x)
I1213 13:06:05.628782 136192 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1213 13:06:05.628791 136192 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1213 13:06:05.628799 136192 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1213 13:06:05.628809 136192 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1213 13:06:05.628818 136192 main.go:143] libmachine: checking permissions on dir: /home
I1213 13:06:05.628827 136192 main.go:143] libmachine: skipping /home - not owner
I1213 13:06:05.628832 136192 main.go:143] libmachine: defining domain...
I1213 13:06:05.630125 136192 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-685870</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/addons-685870.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-685870'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
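
minikube's kvm2 driver registers this XML with libvirt through its Go bindings and then boots the domain (the "defining domain" / "starting domain" lines above). A rough stand-in for that define-then-start sequence, shelling out to the virsh CLI from Go instead of using the bindings (the XML file path is an assumption; the domain name comes from the log):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// `virsh define` registers the domain XML; `virsh start` boots it.
	for _, args := range [][]string{
		{"define", "/tmp/addons-685870.xml"}, // assumed path holding the XML above
		{"start", "addons-685870"},
	} {
		if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
			log.Fatalf("virsh %v: %v\n%s", args, err, out)
		}
	}
}
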
I1213 13:06:05.637172 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:0c:44:09 in network default
I1213 13:06:05.637813 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:05.637832 136192 main.go:143] libmachine: starting domain...
I1213 13:06:05.637837 136192 main.go:143] libmachine: ensuring networks are active...
I1213 13:06:05.638554 136192 main.go:143] libmachine: Ensuring network default is active
I1213 13:06:05.638925 136192 main.go:143] libmachine: Ensuring network mk-addons-685870 is active
I1213 13:06:05.639535 136192 main.go:143] libmachine: getting domain XML...
I1213 13:06:05.640521 136192 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-685870</name>
<uuid>23167541-60b9-4d48-b988-554cdedf00bd</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/addons-685870.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:4c:b9:14'/>
<source network='mk-addons-685870'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:0c:44:09'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1213 13:06:06.924164 136192 main.go:143] libmachine: waiting for domain to start...
I1213 13:06:06.925652 136192 main.go:143] libmachine: domain is now running
I1213 13:06:06.925676 136192 main.go:143] libmachine: waiting for IP...
I1213 13:06:06.926504 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:06.927134 136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
I1213 13:06:06.927157 136192 main.go:143] libmachine: trying to list again with source=arp
I1213 13:06:06.927496 136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
I1213 13:06:06.927563 136192 retry.go:31] will retry after 261.089812ms: waiting for domain to come up
I1213 13:06:07.190003 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:07.190569 136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
I1213 13:06:07.190587 136192 main.go:143] libmachine: trying to list again with source=arp
I1213 13:06:07.190907 136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
I1213 13:06:07.190943 136192 retry.go:31] will retry after 306.223214ms: waiting for domain to come up
I1213 13:06:07.498340 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:07.498783 136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
I1213 13:06:07.498797 136192 main.go:143] libmachine: trying to list again with source=arp
I1213 13:06:07.499083 136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
I1213 13:06:07.499118 136192 retry.go:31] will retry after 402.041961ms: waiting for domain to come up
I1213 13:06:07.902729 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:07.903309 136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
I1213 13:06:07.903327 136192 main.go:143] libmachine: trying to list again with source=arp
I1213 13:06:07.903647 136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
I1213 13:06:07.903688 136192 retry.go:31] will retry after 372.890146ms: waiting for domain to come up
I1213 13:06:08.278127 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:08.278560 136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
I1213 13:06:08.278574 136192 main.go:143] libmachine: trying to list again with source=arp
I1213 13:06:08.278821 136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
I1213 13:06:08.278858 136192 retry.go:31] will retry after 744.363927ms: waiting for domain to come up
I1213 13:06:09.025006 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:09.025602 136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
I1213 13:06:09.025625 136192 main.go:143] libmachine: trying to list again with source=arp
I1213 13:06:09.025922 136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
I1213 13:06:09.025973 136192 retry.go:31] will retry after 604.505944ms: waiting for domain to come up
I1213 13:06:09.631619 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:09.632204 136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
I1213 13:06:09.632231 136192 main.go:143] libmachine: trying to list again with source=arp
I1213 13:06:09.632586 136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
I1213 13:06:09.632629 136192 retry.go:31] will retry after 862.011279ms: waiting for domain to come up
I1213 13:06:10.495743 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:10.496162 136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
I1213 13:06:10.496174 136192 main.go:143] libmachine: trying to list again with source=arp
I1213 13:06:10.496404 136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
I1213 13:06:10.496441 136192 retry.go:31] will retry after 1.364913195s: waiting for domain to come up
I1213 13:06:11.862877 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:11.863382 136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
I1213 13:06:11.863396 136192 main.go:143] libmachine: trying to list again with source=arp
I1213 13:06:11.863643 136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
I1213 13:06:11.863680 136192 retry.go:31] will retry after 1.467338749s: waiting for domain to come up
I1213 13:06:13.333393 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:13.333887 136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
I1213 13:06:13.333901 136192 main.go:143] libmachine: trying to list again with source=arp
I1213 13:06:13.334194 136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
I1213 13:06:13.334238 136192 retry.go:31] will retry after 1.655012284s: waiting for domain to come up
I1213 13:06:14.990676 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:14.991390 136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
I1213 13:06:14.991419 136192 main.go:143] libmachine: trying to list again with source=arp
I1213 13:06:14.991827 136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
I1213 13:06:14.991882 136192 retry.go:31] will retry after 2.53356744s: waiting for domain to come up
I1213 13:06:17.528950 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:17.529591 136192 main.go:143] libmachine: no network interface addresses found for domain addons-685870 (source=lease)
I1213 13:06:17.529609 136192 main.go:143] libmachine: trying to list again with source=arp
I1213 13:06:17.529974 136192 main.go:143] libmachine: unable to find current IP address of domain addons-685870 in network mk-addons-685870 (interfaces detected: [])
I1213 13:06:17.530029 136192 retry.go:31] will retry after 3.082423333s: waiting for domain to come up
I1213 13:06:20.613931 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:20.614573 136192 main.go:143] libmachine: domain addons-685870 has current primary IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:20.614592 136192 main.go:143] libmachine: found domain IP: 192.168.39.155
I1213 13:06:20.614601 136192 main.go:143] libmachine: reserving static IP address...
I1213 13:06:20.615143 136192 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-685870", mac: "52:54:00:4c:b9:14", ip: "192.168.39.155"} in network mk-addons-685870
I1213 13:06:20.808288 136192 main.go:143] libmachine: reserved static IP address 192.168.39.155 for domain addons-685870
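
The waits logged while polling for the IP above (261ms, 306ms, 402ms, ... up to 3.08s) come from retry.go and grow roughly exponentially with jitter. A minimal sketch of that pattern; only the observed durations are from the log, the base delay, growth factor, and jitter range below are assumptions:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with exponentially growing, jittered delays,
// similar in spirit to the retry.go waits logged above.
func retryWithBackoff(fn func() error, attempts int) error {
	delay := 250 * time.Millisecond // assumed base; the log's first wait is ~261ms
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // assumed growth factor
	}
	return errors.New("domain never reported an IP")
}

func main() {
	_ = retryWithBackoff(func() error { return errors.New("no IP yet") }, 5)
}
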
I1213 13:06:20.808311 136192 main.go:143] libmachine: waiting for SSH...
I1213 13:06:20.808340 136192 main.go:143] libmachine: Getting to WaitForSSH function...
I1213 13:06:20.811159 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:20.811688 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:20.811716 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:20.811928 136192 main.go:143] libmachine: Using SSH client type: native
I1213 13:06:20.812208 136192 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.155 22 <nil> <nil>}
I1213 13:06:20.812221 136192 main.go:143] libmachine: About to run SSH command:
exit 0
I1213 13:06:20.918867 136192 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1213 13:06:20.919302 136192 main.go:143] libmachine: domain creation complete
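
The `exit 0` probe above is how libmachine decides the guest's sshd is accepting connections. A compact version of that check using golang.org/x/crypto/ssh, with the key path, user, and address taken from the log (ignoring host keys mirrors a fresh-VM bootstrap, not a production setting):

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.155:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fresh VM, no known_hosts yet
	})
	if err != nil {
		log.Fatal(err) // not reachable yet; the caller retries
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	if err := sess.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	log.Println("sshd is ready")
}
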
I1213 13:06:20.920860 136192 machine.go:94] provisionDockerMachine start ...
I1213 13:06:20.923046 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:20.923448 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:20.923482 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:20.923651 136192 main.go:143] libmachine: Using SSH client type: native
I1213 13:06:20.923856 136192 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.155 22 <nil> <nil>}
I1213 13:06:20.923871 136192 main.go:143] libmachine: About to run SSH command:
hostname
I1213 13:06:21.030842 136192 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1213 13:06:21.030887 136192 buildroot.go:166] provisioning hostname "addons-685870"
I1213 13:06:21.033915 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:21.034363 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:21.034398 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:21.034591 136192 main.go:143] libmachine: Using SSH client type: native
I1213 13:06:21.034791 136192 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.155 22 <nil> <nil>}
I1213 13:06:21.034803 136192 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-685870 && echo "addons-685870" | sudo tee /etc/hostname
I1213 13:06:21.170243 136192 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-685870
I1213 13:06:21.172969 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:21.173334 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:21.173356 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:21.173506 136192 main.go:143] libmachine: Using SSH client type: native
I1213 13:06:21.173714 136192 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.155 22 <nil> <nil>}
I1213 13:06:21.173730 136192 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-685870' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-685870/g' /etc/hosts;
else
echo '127.0.1.1 addons-685870' | sudo tee -a /etc/hosts;
fi
fi
I1213 13:06:21.291369 136192 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1213 13:06:21.291441 136192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22122-131207/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-131207/.minikube}
I1213 13:06:21.291473 136192 buildroot.go:174] setting up certificates
I1213 13:06:21.291486 136192 provision.go:84] configureAuth start
I1213 13:06:21.294597 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:21.295021 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:21.295065 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:21.297598 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:21.298048 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:21.298101 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:21.298242 136192 provision.go:143] copyHostCerts
I1213 13:06:21.298336 136192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/ca.pem (1078 bytes)
I1213 13:06:21.298476 136192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/cert.pem (1123 bytes)
I1213 13:06:21.298542 136192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-131207/.minikube/key.pem (1675 bytes)
I1213 13:06:21.299514 136192 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem org=jenkins.addons-685870 san=[127.0.0.1 192.168.39.155 addons-685870 localhost minikube]
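
configureAuth issues a server certificate whose SANs cover every name the machine can be reached by; the log lists them explicitly (127.0.0.1, 192.168.39.155, addons-685870, localhost, minikube). A self-contained sketch of issuing such a certificate with crypto/x509; a throwaway CA stands in for ca.pem/ca-key.pem, which the real run loads from disk, and the key size and usages are assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA (the real run parses ca.pem / ca-key.pem instead).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	ca, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}
	// Server certificate covering exactly the SANs from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-685870"}},
		DNSNames:     []string{"addons-685870", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.155")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("issued server.pem equivalent, %d DER bytes", len(der))
}
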
I1213 13:06:21.426641 136192 provision.go:177] copyRemoteCerts
I1213 13:06:21.426715 136192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1213 13:06:21.429502 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:21.429937 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:21.429967 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:21.430133 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:21.514447 136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1213 13:06:21.545060 136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1213 13:06:21.575511 136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1213 13:06:21.605736 136192 provision.go:87] duration metric: took 314.218832ms to configureAuth
I1213 13:06:21.605776 136192 buildroot.go:189] setting minikube options for container-runtime
I1213 13:06:21.606017 136192 config.go:182] Loaded profile config "addons-685870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 13:06:21.608744 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:21.609155 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:21.609182 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:21.609384 136192 main.go:143] libmachine: Using SSH client type: native
I1213 13:06:21.609619 136192 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.155 22 <nil> <nil>}
I1213 13:06:21.609635 136192 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1213 13:06:21.840241 136192 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1213 13:06:21.840269 136192 machine.go:97] duration metric: took 919.388709ms to provisionDockerMachine
I1213 13:06:21.840281 136192 client.go:176] duration metric: took 16.867011394s to LocalClient.Create
I1213 13:06:21.840299 136192 start.go:167] duration metric: took 16.867065987s to libmachine.API.Create "addons-685870"
I1213 13:06:21.840306 136192 start.go:293] postStartSetup for "addons-685870" (driver="kvm2")
I1213 13:06:21.840316 136192 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1213 13:06:21.840378 136192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1213 13:06:21.843187 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:21.843612 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:21.843641 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:21.843778 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:21.927997 136192 ssh_runner.go:195] Run: cat /etc/os-release
I1213 13:06:21.932971 136192 info.go:137] Remote host: Buildroot 2025.02
I1213 13:06:21.933010 136192 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/addons for local assets ...
I1213 13:06:21.933103 136192 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-131207/.minikube/files for local assets ...
I1213 13:06:21.933139 136192 start.go:296] duration metric: took 92.819073ms for postStartSetup
I1213 13:06:21.936391 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:21.936899 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:21.936940 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:21.937321 136192 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/config.json ...
I1213 13:06:21.937541 136192 start.go:128] duration metric: took 16.966236657s to createHost
I1213 13:06:21.940010 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:21.940423 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:21.940447 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:21.940650 136192 main.go:143] libmachine: Using SSH client type: native
I1213 13:06:21.940889 136192 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.155 22 <nil> <nil>}
I1213 13:06:21.940901 136192 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1213 13:06:22.051446 136192 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765631182.011013931
I1213 13:06:22.051478 136192 fix.go:216] guest clock: 1765631182.011013931
I1213 13:06:22.051489 136192 fix.go:229] Guest: 2025-12-13 13:06:22.011013931 +0000 UTC Remote: 2025-12-13 13:06:21.937556264 +0000 UTC m=+17.062827264 (delta=73.457667ms)
I1213 13:06:22.051516 136192 fix.go:200] guest clock delta is within tolerance: 73.457667ms
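
The clock check above runs `date +%s.%N` in the guest and compares it against the host clock; the 73ms delta is under tolerance, so no time sync is forced. A sketch of that comparison (the guest timestamp is the one from the log; the 2s tolerance value is an assumption, the log only states the delta was "within tolerance"):

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	guestOut := "1765631182.011013931" // output of `date +%s.%N` from the log
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("guest clock delta %v (tolerance %v)\n", delta, tolerance)
	if delta > tolerance {
		fmt.Println("would sync the guest clock here")
	}
}
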
I1213 13:06:22.051521 136192 start.go:83] releasing machines lock for "addons-685870", held for 17.080292802s
I1213 13:06:22.054463 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:22.054877 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:22.054902 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:22.055470 136192 ssh_runner.go:195] Run: cat /version.json
I1213 13:06:22.055574 136192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1213 13:06:22.058820 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:22.058954 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:22.059370 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:22.059442 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:22.059473 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:22.059499 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:22.059679 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:22.059908 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:22.162944 136192 ssh_runner.go:195] Run: systemctl --version
I1213 13:06:22.169634 136192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1213 13:06:22.333654 136192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1213 13:06:22.340664 136192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1213 13:06:22.340748 136192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1213 13:06:22.360722 136192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1213 13:06:22.360762 136192 start.go:496] detecting cgroup driver to use...
I1213 13:06:22.360854 136192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1213 13:06:22.383285 136192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1213 13:06:22.400233 136192 docker.go:218] disabling cri-docker service (if available) ...
I1213 13:06:22.400295 136192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1213 13:06:22.417838 136192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1213 13:06:22.434599 136192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1213 13:06:22.582942 136192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1213 13:06:22.804266 136192 docker.go:234] disabling docker service ...
I1213 13:06:22.804339 136192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1213 13:06:22.821608 136192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1213 13:06:22.837759 136192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1213 13:06:23.007854 136192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1213 13:06:23.153574 136192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1213 13:06:23.171473 136192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1213 13:06:23.197940 136192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1213 13:06:23.198022 136192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1213 13:06:23.211201 136192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1213 13:06:23.211282 136192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1213 13:06:23.225666 136192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1213 13:06:23.239067 136192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1213 13:06:23.252244 136192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1213 13:06:23.265889 136192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1213 13:06:23.279755 136192 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1213 13:06:23.304897 136192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
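Note: the tee and sed edits above fully determine the runtime config: /etc/crictl.yaml points crictl at the CRI-O socket, and the 02-crio.conf drop-in gains the pause image, cgroupfs driver, conmon cgroup, and the unprivileged-port sysctl. A sketch of the resulting files, reconstructed from the commands in this log (the files themselves are never printed; TOML table placement is assumed):
  $ sudo cat /etc/crictl.yaml
  runtime-endpoint: unix:///var/run/crio/crio.sock
  $ sudo cat /etc/crio/crio.conf.d/02-crio.conf
  [crio.image]
  pause_image = "registry.k8s.io/pause:3.10.1"
  [crio.runtime]
  cgroup_manager = "cgroupfs"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]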
I1213 13:06:23.320466 136192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1213 13:06:23.334089 136192 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1213 13:06:23.334170 136192 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1213 13:06:23.356279 136192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
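Note: the sysctl failure above is expected on a fresh VM: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which is exactly what the modprobe fixes; the ip_forward write enables routing between pod and host networks. Verifying by hand on the node:
  $ sudo modprobe br_netfilter
  $ sysctl net.bridge.bridge-nf-call-iptables    # resolves now instead of "cannot stat"
  $ cat /proc/sys/net/ipv4/ip_forward            # expected: 1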
I1213 13:06:23.371150 136192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1213 13:06:23.516804 136192 ssh_runner.go:195] Run: sudo systemctl restart crio
I1213 13:06:23.623470 136192 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1213 13:06:23.623566 136192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1213 13:06:23.629854 136192 start.go:564] Will wait 60s for crictl version
I1213 13:06:23.629955 136192 ssh_runner.go:195] Run: which crictl
I1213 13:06:23.634640 136192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1213 13:06:23.673263 136192 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1213 13:06:23.673442 136192 ssh_runner.go:195] Run: crio --version
I1213 13:06:23.704139 136192 ssh_runner.go:195] Run: crio --version
I1213 13:06:23.736836 136192 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
I1213 13:06:23.742052 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:23.742684 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:23.742723 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:23.743009 136192 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1213 13:06:23.748344 136192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1213 13:06:23.764486 136192 kubeadm.go:884] updating cluster {Name:addons-685870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-685870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1213 13:06:23.764667 136192 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1213 13:06:23.764734 136192 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 13:06:23.795907 136192 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
I1213 13:06:23.795993 136192 ssh_runner.go:195] Run: which lz4
I1213 13:06:23.801229 136192 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1213 13:06:23.807154 136192 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1213 13:06:23.807194 136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
I1213 13:06:25.052646 136192 crio.go:462] duration metric: took 1.251454659s to copy over tarball
I1213 13:06:25.052756 136192 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1213 13:06:26.548011 136192 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.495221339s)
I1213 13:06:26.548043 136192 crio.go:469] duration metric: took 1.495360464s to extract the tarball
I1213 13:06:26.548056 136192 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1213 13:06:26.584287 136192 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 13:06:26.623988 136192 crio.go:514] all images are preloaded for cri-o runtime.
I1213 13:06:26.624017 136192 cache_images.go:86] Images are preloaded, skipping loading
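Note: the preload flow is stat-then-scp-then-extract: the first crictl listing found no kube-apiserver image, so the ~340 MB tarball was copied in and unpacked under /var, after which the second listing succeeds. A quick manual spot check for the image the earlier probe looked for:
  $ sudo crictl images | grep registry.k8s.io/kube-apiserver    # present once preloaded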
I1213 13:06:26.624026 136192 kubeadm.go:935] updating node { 192.168.39.155 8443 v1.34.2 crio true true} ...
I1213 13:06:26.624161 136192 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-685870 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.155
[Install]
config:
{KubernetesVersion:v1.34.2 ClusterName:addons-685870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1213 13:06:26.624242 136192 ssh_runner.go:195] Run: crio config
I1213 13:06:26.672101 136192 cni.go:84] Creating CNI manager for ""
I1213 13:06:26.672125 136192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1213 13:06:26.672143 136192 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1213 13:06:26.672170 136192 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.155 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-685870 NodeName:addons-685870 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.155"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.155 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1213 13:06:26.672292 136192 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.155
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "addons-685870"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.39.155"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.155"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
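Note: this rendered config is staged as /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and copied into place before init. When editing such a file by hand, recent kubeadm releases can lint it; a sketch using the version-pinned binary path from this log:
  $ sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml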
I1213 13:06:26.672360 136192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
I1213 13:06:26.684860 136192 binaries.go:51] Found k8s binaries, skipping transfer
I1213 13:06:26.685007 136192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1213 13:06:26.696761 136192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I1213 13:06:26.718338 136192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1213 13:06:26.738782 136192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
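Note: the three writes above install the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit, and the staged kubeadm.yaml.new; systemd only picks up the first two after the daemon-reload that follows. To inspect the merged unit:
  $ systemctl cat kubelet    # kubelet.service plus the 10-kubeadm.conf drop-in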
I1213 13:06:26.762774 136192 ssh_runner.go:195] Run: grep 192.168.39.155 control-plane.minikube.internal$ /etc/hosts
I1213 13:06:26.767692 136192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.155 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
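Note: both /etc/hosts updates are idempotent: grep -v drops any stale entry for the name, the fresh mapping is appended, and the temp file is copied back over /etc/hosts. Afterwards the guest resolves both minikube-internal names:
  $ grep minikube.internal /etc/hosts
  192.168.39.1 host.minikube.internal
  192.168.39.155 control-plane.minikube.internal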
I1213 13:06:26.783326 136192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1213 13:06:26.927658 136192 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1213 13:06:26.949320 136192 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870 for IP: 192.168.39.155
I1213 13:06:26.949350 136192 certs.go:195] generating shared ca certs ...
I1213 13:06:26.949368 136192 certs.go:227] acquiring lock for ca certs: {Name:mk4d1e73c1a19abecca2e995e14d97b9ab149024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 13:06:26.949543 136192 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key
I1213 13:06:27.020585 136192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt ...
I1213 13:06:27.020620 136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt: {Name:mkc6becf2b5f838ac912d42bc6ce0d833d4aff27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 13:06:27.020809 136192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key ...
I1213 13:06:27.020821 136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key: {Name:mk210c5828839a72839d87b1daf48c528ece1570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 13:06:27.020906 136192 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key
I1213 13:06:27.055678 136192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.crt ...
I1213 13:06:27.055709 136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.crt: {Name:mk6ca8839bfaae9762e7287d301b14c26c154a9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 13:06:27.055889 136192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key ...
I1213 13:06:27.055902 136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key: {Name:mk391bc7627b6c7926cedbd94a6cf416b256163f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 13:06:27.055977 136192 certs.go:257] generating profile certs ...
I1213 13:06:27.056038 136192 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.key
I1213 13:06:27.056060 136192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt with IP's: []
I1213 13:06:27.170089 136192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt ...
I1213 13:06:27.170128 136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.crt: {Name:mkcd6a7e733f02f497d31820fd8e522c46801a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 13:06:27.170312 136192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.key ...
I1213 13:06:27.170323 136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/client.key: {Name:mkb49195f8d8cd9ff4872ba3e5202bb1d4127763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 13:06:27.171112 136192 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.key.b3eeabe3
I1213 13:06:27.171136 136192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.crt.b3eeabe3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.155]
I1213 13:06:27.260412 136192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.crt.b3eeabe3 ...
I1213 13:06:27.260448 136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.crt.b3eeabe3: {Name:mkb7f1531d10f1ca11b807c4deeade9593c38873 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 13:06:27.260622 136192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.key.b3eeabe3 ...
I1213 13:06:27.260636 136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.key.b3eeabe3: {Name:mk34473adfa4aa41d4f3704f7b241bd13b12328f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 13:06:27.260706 136192 certs.go:382] copying /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.crt.b3eeabe3 -> /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.crt
I1213 13:06:27.260801 136192 certs.go:386] copying /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.key.b3eeabe3 -> /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.key
I1213 13:06:27.260858 136192 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/proxy-client.key
I1213 13:06:27.260879 136192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/proxy-client.crt with IP's: []
I1213 13:06:27.353419 136192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/proxy-client.crt ...
I1213 13:06:27.353456 136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/proxy-client.crt: {Name:mk58b875199fa3fe9d70911d1dcd14e8cb70d824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 13:06:27.353637 136192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/proxy-client.key ...
I1213 13:06:27.353651 136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/proxy-client.key: {Name:mk09cf351ddd623415115f8a1cb58bfbf0a0e79e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 13:06:27.353830 136192 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca-key.pem (1675 bytes)
I1213 13:06:27.353877 136192 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/ca.pem (1078 bytes)
I1213 13:06:27.353907 136192 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/cert.pem (1123 bytes)
I1213 13:06:27.353931 136192 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-131207/.minikube/certs/key.pem (1675 bytes)
I1213 13:06:27.354671 136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1213 13:06:27.387139 136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1213 13:06:27.421827 136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1213 13:06:27.455062 136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1213 13:06:27.488938 136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1213 13:06:27.521731 136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1213 13:06:27.553005 136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1213 13:06:27.583824 136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/profiles/addons-685870/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1213 13:06:27.615856 136192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-131207/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1213 13:06:27.652487 136192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1213 13:06:27.678188 136192 ssh_runner.go:195] Run: openssl version
I1213 13:06:27.685724 136192 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1213 13:06:27.698872 136192 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1213 13:06:27.713966 136192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1213 13:06:27.719658 136192 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 13:06 /usr/share/ca-certificates/minikubeCA.pem
I1213 13:06:27.719753 136192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1213 13:06:27.727820 136192 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1213 13:06:27.740292 136192 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
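Note: b5213941.0 is OpenSSL's subject-hash name: TLS clients locate a CA under /etc/ssl/certs by the hash of its subject, so this symlink is what makes the minikube CA trusted system-wide. Reproducing the hash from the log's own cert:
  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  b5213941
  $ ls -l /etc/ssl/certs/b5213941.0    # -> /etc/ssl/certs/minikubeCA.pem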
I1213 13:06:27.752732 136192 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1213 13:06:27.757673 136192 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1213 13:06:27.757737 136192 kubeadm.go:401] StartCluster: {Name:addons-685870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22122/minikube-v1.37.0-1765613186-22122-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-685870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1213 13:06:27.757847 136192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1213 13:06:27.757907 136192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1213 13:06:27.791959 136192 cri.go:89] found id: ""
I1213 13:06:27.792060 136192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1213 13:06:27.806777 136192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1213 13:06:27.821202 136192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1213 13:06:27.834224 136192 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1213 13:06:27.834254 136192 kubeadm.go:158] found existing configuration files:
I1213 13:06:27.834309 136192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1213 13:06:27.848208 136192 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1213 13:06:27.848296 136192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1213 13:06:27.863046 136192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1213 13:06:27.876946 136192 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1213 13:06:27.877019 136192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1213 13:06:27.889876 136192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1213 13:06:27.901456 136192 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1213 13:06:27.901529 136192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1213 13:06:27.914050 136192 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1213 13:06:27.925250 136192 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1213 13:06:27.925324 136192 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1213 13:06:27.937648 136192 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1213 13:06:27.987236 136192 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
I1213 13:06:27.987355 136192 kubeadm.go:319] [preflight] Running pre-flight checks
I1213 13:06:28.088459 136192 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1213 13:06:28.088591 136192 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1213 13:06:28.088745 136192 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1213 13:06:28.098588 136192 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1213 13:06:28.101492 136192 out.go:252] - Generating certificates and keys ...
I1213 13:06:28.102177 136192 kubeadm.go:319] [certs] Using existing ca certificate authority
I1213 13:06:28.102275 136192 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1213 13:06:28.337450 136192 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1213 13:06:28.508840 136192 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1213 13:06:28.738614 136192 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1213 13:06:28.833990 136192 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1213 13:06:29.215739 136192 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1213 13:06:29.215925 136192 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-685870 localhost] and IPs [192.168.39.155 127.0.0.1 ::1]
I1213 13:06:29.498442 136192 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1213 13:06:29.498615 136192 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-685870 localhost] and IPs [192.168.39.155 127.0.0.1 ::1]
I1213 13:06:29.785065 136192 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1213 13:06:29.824816 136192 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1213 13:06:29.892652 136192 kubeadm.go:319] [certs] Generating "sa" key and public key
I1213 13:06:29.892783 136192 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1213 13:06:30.171653 136192 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1213 13:06:30.399034 136192 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1213 13:06:30.557776 136192 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1213 13:06:30.783252 136192 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1213 13:06:31.092971 136192 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1213 13:06:31.093467 136192 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1213 13:06:31.096606 136192 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1213 13:06:31.098383 136192 out.go:252] - Booting up control plane ...
I1213 13:06:31.098509 136192 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1213 13:06:31.098599 136192 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1213 13:06:31.099627 136192 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1213 13:06:31.118680 136192 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1213 13:06:31.119349 136192 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1213 13:06:31.126265 136192 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1213 13:06:31.126535 136192 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1213 13:06:31.126616 136192 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1213 13:06:31.301451 136192 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1213 13:06:31.301600 136192 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1213 13:06:31.802326 136192 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.694057ms
I1213 13:06:31.805204 136192 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1213 13:06:31.805312 136192 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.155:8443/livez
I1213 13:06:31.805436 136192 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1213 13:06:31.805571 136192 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1213 13:06:34.495999 136192 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.693587662s
I1213 13:06:35.809561 136192 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.008098096s
I1213 13:06:38.300135 136192 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501522641s
I1213 13:06:38.319096 136192 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1213 13:06:38.338510 136192 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1213 13:06:38.351406 136192 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1213 13:06:38.351615 136192 kubeadm.go:319] [mark-control-plane] Marking the node addons-685870 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1213 13:06:38.362694 136192 kubeadm.go:319] [bootstrap-token] Using token: 4rz4x4.q7etm0eqh5h03p3i
I1213 13:06:38.364043 136192 out.go:252] - Configuring RBAC rules ...
I1213 13:06:38.364212 136192 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1213 13:06:38.372994 136192 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1213 13:06:38.378997 136192 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1213 13:06:38.384332 136192 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1213 13:06:38.388394 136192 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1213 13:06:38.391844 136192 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1213 13:06:38.709212 136192 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1213 13:06:39.143173 136192 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1213 13:06:39.706526 136192 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1213 13:06:39.709084 136192 kubeadm.go:319]
I1213 13:06:39.709142 136192 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1213 13:06:39.709148 136192 kubeadm.go:319]
I1213 13:06:39.709316 136192 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1213 13:06:39.709344 136192 kubeadm.go:319]
I1213 13:06:39.709369 136192 kubeadm.go:319] mkdir -p $HOME/.kube
I1213 13:06:39.709419 136192 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1213 13:06:39.709517 136192 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1213 13:06:39.709545 136192 kubeadm.go:319]
I1213 13:06:39.709615 136192 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1213 13:06:39.709625 136192 kubeadm.go:319]
I1213 13:06:39.709709 136192 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1213 13:06:39.709722 136192 kubeadm.go:319]
I1213 13:06:39.709785 136192 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1213 13:06:39.709892 136192 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1213 13:06:39.709987 136192 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1213 13:06:39.709998 136192 kubeadm.go:319]
I1213 13:06:39.710130 136192 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1213 13:06:39.710245 136192 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1213 13:06:39.710256 136192 kubeadm.go:319]
I1213 13:06:39.710372 136192 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4rz4x4.q7etm0eqh5h03p3i \
I1213 13:06:39.710523 136192 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:0d7bdf6e2899acb1365169f3e602d91eb327e6d9802bf5e86c346c4733b25f8a \
I1213 13:06:39.710554 136192 kubeadm.go:319] --control-plane
I1213 13:06:39.710561 136192 kubeadm.go:319]
I1213 13:06:39.710684 136192 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1213 13:06:39.710693 136192 kubeadm.go:319]
I1213 13:06:39.710814 136192 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4rz4x4.q7etm0eqh5h03p3i \
I1213 13:06:39.711019 136192 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:0d7bdf6e2899acb1365169f3e602d91eb327e6d9802bf5e86c346c4733b25f8a
I1213 13:06:39.711191 136192 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
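Note: this kubeadm warning is harmless here, since minikube starts and supervises the kubelet unit itself (see the systemctl start earlier in this log); on a manually provisioned node the remedy is exactly what the message suggests:
  $ sudo systemctl enable kubelet.service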
I1213 13:06:39.711208 136192 cni.go:84] Creating CNI manager for ""
I1213 13:06:39.711220 136192 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1213 13:06:39.712890 136192 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1213 13:06:39.714055 136192 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1213 13:06:39.726710 136192 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
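Note: the 496-byte conflist is minikube's default bridge CNI config; its contents are not echoed in this log, but given the pod CIDR chosen earlier (10.244.0.0/16) it plausibly has this shape (a sketch, not the verbatim file):
  $ cat /etc/cni/net.d/1-k8s.conflist
  {
    "cniVersion": "0.4.0",
    "name": "bridge",
    "plugins": [
      { "type": "bridge", "bridge": "bridge", "addIf": "true",
        "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }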
I1213 13:06:39.748747 136192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1213 13:06:39.748843 136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 13:06:39.748911 136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-685870 minikube.k8s.io/updated_at=2025_12_13T13_06_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7 minikube.k8s.io/name=addons-685870 minikube.k8s.io/primary=true
I1213 13:06:39.885985 136192 ops.go:34] apiserver oom_adj: -16
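Note: oom_adj -16 confirms the apiserver process is shielded from the kernel OOM killer relative to ordinary workloads; the value is read straight from /proc by the command a few lines up:
  $ cat /proc/$(pgrep kube-apiserver)/oom_adj
  -16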
I1213 13:06:39.886134 136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 13:06:40.386502 136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 13:06:40.887012 136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 13:06:41.386282 136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 13:06:41.886717 136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 13:06:42.386565 136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 13:06:42.886664 136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 13:06:43.386544 136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 13:06:43.886849 136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 13:06:44.387093 136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 13:06:44.887001 136192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 13:06:45.016916 136192 kubeadm.go:1114] duration metric: took 5.268135557s to wait for elevateKubeSystemPrivileges
I1213 13:06:45.016958 136192 kubeadm.go:403] duration metric: took 17.259226192s to StartCluster
I1213 13:06:45.016993 136192 settings.go:142] acquiring lock: {Name:mk721202c5d0c56fb9fb8fa9c13a73c8448f716f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 13:06:45.017145 136192 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22122-131207/kubeconfig
I1213 13:06:45.017555 136192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-131207/kubeconfig: {Name:mk5ec7ec5b8552878ed34d3387da68b813d7cd4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 13:06:45.017791 136192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1213 13:06:45.017828 136192 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1213 13:06:45.017874 136192 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
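Note: the toEnable map above is the effective addon set for this profile; the same state can be listed and toggled from the CLI binary built by this job:
  $ out/minikube-linux-amd64 -p addons-685870 addons list
  $ out/minikube-linux-amd64 -p addons-685870 addons enable ingress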
I1213 13:06:45.017999 136192 addons.go:70] Setting yakd=true in profile "addons-685870"
I1213 13:06:45.018013 136192 addons.go:70] Setting inspektor-gadget=true in profile "addons-685870"
I1213 13:06:45.018022 136192 addons.go:239] Setting addon yakd=true in "addons-685870"
I1213 13:06:45.018034 136192 addons.go:239] Setting addon inspektor-gadget=true in "addons-685870"
I1213 13:06:45.018059 136192 host.go:66] Checking if "addons-685870" exists ...
I1213 13:06:45.018059 136192 host.go:66] Checking if "addons-685870" exists ...
I1213 13:06:45.018081 136192 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-685870"
I1213 13:06:45.018100 136192 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-685870"
I1213 13:06:45.018103 136192 addons.go:70] Setting registry-creds=true in profile "addons-685870"
I1213 13:06:45.018113 136192 config.go:182] Loaded profile config "addons-685870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 13:06:45.018136 136192 host.go:66] Checking if "addons-685870" exists ...
I1213 13:06:45.018139 136192 addons.go:239] Setting addon registry-creds=true in "addons-685870"
I1213 13:06:45.018125 136192 addons.go:70] Setting ingress=true in profile "addons-685870"
I1213 13:06:45.018165 136192 addons.go:239] Setting addon ingress=true in "addons-685870"
I1213 13:06:45.018175 136192 host.go:66] Checking if "addons-685870" exists ...
I1213 13:06:45.018184 136192 addons.go:70] Setting gcp-auth=true in profile "addons-685870"
I1213 13:06:45.018199 136192 host.go:66] Checking if "addons-685870" exists ...
I1213 13:06:45.018204 136192 mustload.go:66] Loading cluster: addons-685870
I1213 13:06:45.018380 136192 config.go:182] Loaded profile config "addons-685870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 13:06:45.018953 136192 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-685870"
I1213 13:06:45.018979 136192 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-685870"
I1213 13:06:45.019000 136192 addons.go:70] Setting storage-provisioner=true in profile "addons-685870"
I1213 13:06:45.019024 136192 addons.go:239] Setting addon storage-provisioner=true in "addons-685870"
I1213 13:06:45.019049 136192 host.go:66] Checking if "addons-685870" exists ...
I1213 13:06:45.018054 136192 addons.go:70] Setting default-storageclass=true in profile "addons-685870"
I1213 13:06:45.019099 136192 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-685870"
I1213 13:06:45.019172 136192 addons.go:70] Setting cloud-spanner=true in profile "addons-685870"
I1213 13:06:45.019192 136192 addons.go:239] Setting addon cloud-spanner=true in "addons-685870"
I1213 13:06:45.019220 136192 host.go:66] Checking if "addons-685870" exists ...
I1213 13:06:45.019322 136192 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-685870"
I1213 13:06:45.019378 136192 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-685870"
I1213 13:06:45.019401 136192 host.go:66] Checking if "addons-685870" exists ...
I1213 13:06:45.019579 136192 addons.go:70] Setting metrics-server=true in profile "addons-685870"
I1213 13:06:45.019625 136192 addons.go:239] Setting addon metrics-server=true in "addons-685870"
I1213 13:06:45.019670 136192 host.go:66] Checking if "addons-685870" exists ...
I1213 13:06:45.019859 136192 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-685870"
I1213 13:06:45.019898 136192 addons.go:70] Setting registry=true in profile "addons-685870"
I1213 13:06:45.019903 136192 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-685870"
I1213 13:06:45.019865 136192 addons.go:70] Setting ingress-dns=true in profile "addons-685870"
I1213 13:06:45.019937 136192 addons.go:239] Setting addon ingress-dns=true in "addons-685870"
I1213 13:06:45.019941 136192 host.go:66] Checking if "addons-685870" exists ...
I1213 13:06:45.019963 136192 host.go:66] Checking if "addons-685870" exists ...
I1213 13:06:45.020112 136192 addons.go:70] Setting volumesnapshots=true in profile "addons-685870"
I1213 13:06:45.020198 136192 addons.go:239] Setting addon volumesnapshots=true in "addons-685870"
I1213 13:06:45.020236 136192 host.go:66] Checking if "addons-685870" exists ...
I1213 13:06:45.019881 136192 addons.go:70] Setting volcano=true in profile "addons-685870"
I1213 13:06:45.020331 136192 addons.go:239] Setting addon volcano=true in "addons-685870"
I1213 13:06:45.020363 136192 host.go:66] Checking if "addons-685870" exists ...
I1213 13:06:45.019917 136192 addons.go:239] Setting addon registry=true in "addons-685870"
I1213 13:06:45.020408 136192 host.go:66] Checking if "addons-685870" exists ...
I1213 13:06:45.020305 136192 out.go:179] * Verifying Kubernetes components...
I1213 13:06:45.021855 136192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1213 13:06:45.024752 136192 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1213 13:06:45.024796 136192 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
I1213 13:06:45.024819 136192 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
I1213 13:06:45.025594 136192 host.go:66] Checking if "addons-685870" exists ...
I1213 13:06:45.028394 136192 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-685870"
I1213 13:06:45.028435 136192 host.go:66] Checking if "addons-685870" exists ...
I1213 13:06:45.028395 136192 addons.go:239] Setting addon default-storageclass=true in "addons-685870"
I1213 13:06:45.028841 136192 host.go:66] Checking if "addons-685870" exists ...
I1213 13:06:45.028893 136192 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1213 13:06:45.029445 136192 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1213 13:06:45.028991 136192 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1213 13:06:45.029586 136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1213 13:06:45.029792 136192 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1213 13:06:45.029791 136192 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1213 13:06:45.029832 136192 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1213 13:06:45.029807 136192 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
W1213 13:06:45.029877 136192 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1213 13:06:45.029940 136192 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1213 13:06:45.029959 136192 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
I1213 13:06:45.030454 136192 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1213 13:06:45.030462 136192 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1213 13:06:45.031186 136192 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1213 13:06:45.031198 136192 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1213 13:06:45.031203 136192 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1213 13:06:45.031221 136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1213 13:06:45.031244 136192 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1213 13:06:45.031268 136192 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1213 13:06:45.031282 136192 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1213 13:06:45.031342 136192 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1213 13:06:45.031908 136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1213 13:06:45.031990 136192 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1213 13:06:45.032315 136192 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1213 13:06:45.032325 136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1213 13:06:45.032328 136192 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1213 13:06:45.032819 136192 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1213 13:06:45.032854 136192 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1213 13:06:45.033143 136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1213 13:06:45.032859 136192 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1213 13:06:45.033206 136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1213 13:06:45.032871 136192 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1213 13:06:45.033297 136192 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1213 13:06:45.033512 136192 out.go:179] - Using image docker.io/busybox:stable
I1213 13:06:45.033538 136192 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1213 13:06:45.033565 136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1213 13:06:45.034787 136192 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1213 13:06:45.034912 136192 out.go:179] - Using image docker.io/registry:3.0.0
I1213 13:06:45.035708 136192 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1213 13:06:45.035738 136192 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1213 13:06:45.035831 136192 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1213 13:06:45.035846 136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1213 13:06:45.035914 136192 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1213 13:06:45.035933 136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1213 13:06:45.037009 136192 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1213 13:06:45.037028 136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1213 13:06:45.037955 136192 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1213 13:06:45.038533 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.039905 136192 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1213 13:06:45.040413 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:45.040453 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.040542 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.040997 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:45.041493 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.041804 136192 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1213 13:06:45.042291 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:45.042326 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.042968 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:45.043039 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:45.043088 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.043204 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.043715 136192 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1213 13:06:45.043988 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:45.044237 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.044592 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.044853 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:45.044980 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.044984 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.045370 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.045873 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.045990 136192 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1213 13:06:45.046169 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:45.046202 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.046216 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:45.046595 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:45.046633 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.046734 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.046802 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:45.046829 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.046880 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:45.046931 136192 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1213 13:06:45.046947 136192 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1213 13:06:45.047189 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:45.047223 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.047273 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:45.047294 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:45.047344 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:45.047369 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.047520 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.047795 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:45.047813 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:45.048000 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.048251 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:45.048284 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.048673 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:45.049252 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.049332 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.049530 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:45.049561 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.049751 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:45.049785 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.049831 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:45.050116 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:45.050286 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:45.050316 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.050402 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:45.050447 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.050519 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:45.050740 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:45.051414 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.051745 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:45.051762 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:45.051888 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
W1213 13:06:45.221676 136192 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:58492->192.168.39.155:22: read: connection reset by peer
I1213 13:06:45.221711 136192 retry.go:31] will retry after 134.119975ms: ssh: handshake failed: read tcp 192.168.39.1:58492->192.168.39.155:22: read: connection reset by peer
W1213 13:06:45.264693 136192 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:58504->192.168.39.155:22: read: connection reset by peer
I1213 13:06:45.264737 136192 retry.go:31] will retry after 261.947229ms: ssh: handshake failed: read tcp 192.168.39.1:58504->192.168.39.155:22: read: connection reset by peer
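
The two handshake failures above are transient: several provisioning goroutines dial the guest's port 22 at once while sshd is still coming up, so the TCP connection is reset and retried after a short backoff. Using the key path, user and IP from the sshutil lines above, the same connection could be checked by hand with something like:

    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa \
        docker@192.168.39.155 true
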
I1213 13:06:45.758712 136192 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1213 13:06:45.758833 136192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
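
The bash pipeline above rewrites the live coredns ConfigMap in place: sed injects a hosts block ahead of the "forward . /etc/resolv.conf" line and a log directive ahead of "errors", then feeds the result back through kubectl replace. Assuming a stock Corefile, the injected block should read roughly:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }

and the edit can be inspected afterwards with:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
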
I1213 13:06:45.805615 136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1213 13:06:45.819620 136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1213 13:06:45.820266 136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1213 13:06:45.915934 136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1213 13:06:45.936689 136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1213 13:06:45.958694 136192 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1213 13:06:45.958747 136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1213 13:06:45.973535 136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1213 13:06:46.014641 136192 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1213 13:06:46.014683 136192 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1213 13:06:46.024138 136192 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1213 13:06:46.024162 136192 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1213 13:06:46.050040 136192 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1213 13:06:46.050063 136192 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1213 13:06:46.070780 136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1213 13:06:46.143955 136192 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1213 13:06:46.143988 136192 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1213 13:06:46.149340 136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1213 13:06:46.230803 136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1213 13:06:46.411602 136192 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1213 13:06:46.411640 136192 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1213 13:06:46.414042 136192 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1213 13:06:46.414105 136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1213 13:06:46.419595 136192 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1213 13:06:46.419613 136192 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1213 13:06:46.428373 136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1213 13:06:46.446451 136192 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1213 13:06:46.446475 136192 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1213 13:06:46.502080 136192 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1213 13:06:46.502110 136192 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1213 13:06:46.645636 136192 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1213 13:06:46.645664 136192 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1213 13:06:46.684517 136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1213 13:06:46.691013 136192 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1213 13:06:46.691100 136192 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1213 13:06:46.736244 136192 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1213 13:06:46.736281 136192 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1213 13:06:46.771104 136192 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1213 13:06:46.771130 136192 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1213 13:06:46.889934 136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1213 13:06:46.966444 136192 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1213 13:06:46.966479 136192 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1213 13:06:46.969563 136192 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1213 13:06:46.969582 136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1213 13:06:46.981367 136192 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1213 13:06:46.981390 136192 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1213 13:06:47.286285 136192 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1213 13:06:47.286316 136192 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1213 13:06:47.288360 136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1213 13:06:47.311879 136192 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1213 13:06:47.311905 136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1213 13:06:47.581669 136192 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1213 13:06:47.581697 136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1213 13:06:47.657485 136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1213 13:06:48.072591 136192 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1213 13:06:48.072620 136192 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1213 13:06:48.187153 136192 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.42839983s)
I1213 13:06:48.187220 136192 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.428356041s)
I1213 13:06:48.187243 136192 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1213 13:06:48.188893 136192 node_ready.go:35] waiting up to 6m0s for node "addons-685870" to be "Ready" ...
I1213 13:06:48.197434 136192 node_ready.go:49] node "addons-685870" is "Ready"
I1213 13:06:48.197457 136192 node_ready.go:38] duration metric: took 8.514158ms for node "addons-685870" to be "Ready" ...
I1213 13:06:48.197468 136192 api_server.go:52] waiting for apiserver process to appear ...
I1213 13:06:48.197510 136192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1213 13:06:48.604232 136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.798572172s)
I1213 13:06:48.604353 136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.784052323s)
I1213 13:06:48.604387 136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.784720127s)
I1213 13:06:48.607316 136192 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1213 13:06:48.607342 136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1213 13:06:48.693986 136192 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-685870" context rescaled to 1 replicas
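
kubeadm ships coredns with two replicas (both pods are visible in the pod list below); on a single-node profile minikube scales the Deployment down to one. The equivalent manual command would be something like:

    kubectl --context addons-685870 -n kube-system scale deployment coredns --replicas=1
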
I1213 13:06:48.825808 136192 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1213 13:06:48.825839 136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1213 13:06:49.190080 136192 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1213 13:06:49.190112 136192 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1213 13:06:49.301996 136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1213 13:06:49.990916 136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.074938605s)
I1213 13:06:49.990999 136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.054268788s)
I1213 13:06:50.926969 136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.856137833s)
I1213 13:06:50.927178 136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.953603595s)
I1213 13:06:51.132232 136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.982844342s)
I1213 13:06:51.132280 136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.901438552s)
I1213 13:06:52.479567 136192 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1213 13:06:52.482697 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:52.483179 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:52.483218 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:52.483410 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:52.640648 136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.212236395s)
I1213 13:06:52.640685 136192 addons.go:495] Verifying addon ingress=true in "addons-685870"
I1213 13:06:52.640772 136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.956212932s)
I1213 13:06:52.640857 136192 addons.go:495] Verifying addon registry=true in "addons-685870"
I1213 13:06:52.640866 136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.750879656s)
I1213 13:06:52.640885 136192 addons.go:495] Verifying addon metrics-server=true in "addons-685870"
I1213 13:06:52.640958 136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.352557093s)
I1213 13:06:52.642099 136192 out.go:179] * Verifying ingress addon...
I1213 13:06:52.642104 136192 out.go:179] * Verifying registry addon...
I1213 13:06:52.642641 136192 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-685870 service yakd-dashboard -n yakd-dashboard
I1213 13:06:52.644056 136192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1213 13:06:52.644237 136192 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1213 13:06:52.715136 136192 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1213 13:06:52.788062 136192 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1213 13:06:52.788099 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:06:52.788067 136192 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1213 13:06:52.788122 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
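
These kapi.go lines are a label-selector poll: list the pods matching the selector, then re-check until every matched pod reports Ready. A rough hand-rolled equivalent for the registry selector (timeout picked arbitrarily here) would be:

    kubectl --context addons-685870 -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=registry \
      --for=condition=Ready --timeout=5m
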
I1213 13:06:52.795039 136192 addons.go:239] Setting addon gcp-auth=true in "addons-685870"
I1213 13:06:52.795101 136192 host.go:66] Checking if "addons-685870" exists ...
I1213 13:06:52.796787 136192 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1213 13:06:52.799278 136192 main.go:143] libmachine: domain addons-685870 has defined MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:52.799786 136192 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4c:b9:14", ip: ""} in network mk-addons-685870: {Iface:virbr1 ExpiryTime:2025-12-13 14:06:19 +0000 UTC Type:0 Mac:52:54:00:4c:b9:14 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:addons-685870 Clientid:01:52:54:00:4c:b9:14}
I1213 13:06:52.799831 136192 main.go:143] libmachine: domain addons-685870 has defined IP address 192.168.39.155 and MAC address 52:54:00:4c:b9:14 in network mk-addons-685870
I1213 13:06:52.800028 136192 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22122-131207/.minikube/machines/addons-685870/id_rsa Username:docker}
I1213 13:06:52.988279 136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.330743397s)
I1213 13:06:52.988302 136192 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.790778977s)
I1213 13:06:52.988326 136192 api_server.go:72] duration metric: took 7.970460889s to wait for apiserver process to appear ...
I1213 13:06:52.988333 136192 api_server.go:88] waiting for apiserver healthz status ...
W1213 13:06:52.988328 136192 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1213 13:06:52.988354 136192 api_server.go:253] Checking apiserver healthz at https://192.168.39.155:8443/healthz ...
I1213 13:06:52.988357 136192 retry.go:31] will retry after 190.807387ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
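
This failure is an ordering race rather than a broken manifest: the same apply batch creates the VolumeSnapshot CRDs and a VolumeSnapshotClass object, and the class is rejected because the API server has not yet registered the just-created snapshot.storage.k8s.io/v1 kinds. minikube simply retries (the forced re-apply at 13:06:53 below goes through). An alternative, sketched here only for illustration, is to wait for the CRDs to become established before applying any custom resources:

    kubectl wait --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
      crd/volumesnapshots.snapshot.storage.k8s.io
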
I1213 13:06:53.020180 136192 api_server.go:279] https://192.168.39.155:8443/healthz returned 200:
ok
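
The healthz probe needs no credentials: by default the system:public-info-viewer binding exposes /healthz, /livez and /readyz to unauthenticated callers, so the same check can be reproduced from the host with:

    curl -sk https://192.168.39.155:8443/healthz

which prints "ok" once the apiserver is up.
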
I1213 13:06:53.040633 136192 api_server.go:141] control plane version: v1.34.2
I1213 13:06:53.040683 136192 api_server.go:131] duration metric: took 52.340104ms to wait for apiserver health ...
I1213 13:06:53.040699 136192 system_pods.go:43] waiting for kube-system pods to appear ...
I1213 13:06:53.098834 136192 system_pods.go:59] 17 kube-system pods found
I1213 13:06:53.098894 136192 system_pods.go:61] "amd-gpu-device-plugin-sl2f8" [10079e75-52ad-4ae2-97a6-8c8a76f8cd2e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1213 13:06:53.098909 136192 system_pods.go:61] "coredns-66bc5c9577-277fw" [12d631bf-ed9a-438c-8e6b-7d606f1c5363] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1213 13:06:53.098924 136192 system_pods.go:61] "coredns-66bc5c9577-ztskd" [5b623913-ee74-409f-a7bc-5fda744c8583] Running
I1213 13:06:53.098934 136192 system_pods.go:61] "etcd-addons-685870" [381c1c3c-6e27-4f86-b43d-455f0cd88783] Running
I1213 13:06:53.098940 136192 system_pods.go:61] "kube-apiserver-addons-685870" [16689b99-d74a-4b25-820e-4975dbaa96bc] Running
I1213 13:06:53.098949 136192 system_pods.go:61] "kube-controller-manager-addons-685870" [13f55bbd-6f0f-4c13-a401-bd5d4719d5f6] Running
I1213 13:06:53.098957 136192 system_pods.go:61] "kube-ingress-dns-minikube" [c7214f42-abd1-4b9a-a6a5-431cff38e423] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1213 13:06:53.098969 136192 system_pods.go:61] "kube-proxy-hlmj5" [431fd1a1-aa6e-4095-af35-72087499f30a] Running
I1213 13:06:53.098979 136192 system_pods.go:61] "kube-scheduler-addons-685870" [87faddcd-32df-403f-a0f8-7e9b8370940c] Running
I1213 13:06:53.098989 136192 system_pods.go:61] "metrics-server-85b7d694d7-xqtfb" [2329f277-682f-41d0-9879-ac4768581afd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1213 13:06:53.099005 136192 system_pods.go:61] "nvidia-device-plugin-daemonset-k6r7t" [fabec6f5-3861-4173-b733-8b09a8eeddfa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1213 13:06:53.099016 136192 system_pods.go:61] "registry-6b586f9694-4xd6c" [42f338ba-b090-4f81-ad48-bcb9795e19cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1213 13:06:53.099025 136192 system_pods.go:61] "registry-creds-764b6fb674-lmxzj" [eb2685cd-b67a-4045-a9fd-f3e2480fd2b7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1213 13:06:53.099035 136192 system_pods.go:61] "registry-proxy-ww99f" [b233ab84-669c-4f80-a75e-051ffeafc9b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1213 13:06:53.099042 136192 system_pods.go:61] "snapshot-controller-7d9fbc56b8-68skh" [b3e3630b-02ff-49ab-a56a-554ddddfc5e9] Pending
I1213 13:06:53.099048 136192 system_pods.go:61] "snapshot-controller-7d9fbc56b8-sxjgh" [6feb5975-075d-4182-acce-5e1e857e5709] Pending
I1213 13:06:53.099058 136192 system_pods.go:61] "storage-provisioner" [bef2cdab-e284-4ee0-b0ab-61d8d1ea5f8e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1213 13:06:53.099068 136192 system_pods.go:74] duration metric: took 58.359055ms to wait for pod list to return data ...
I1213 13:06:53.099116 136192 default_sa.go:34] waiting for default service account to be created ...
I1213 13:06:53.166025 136192 default_sa.go:45] found service account: "default"
I1213 13:06:53.166054 136192 default_sa.go:55] duration metric: took 66.931268ms for default service account to be created ...
I1213 13:06:53.166084 136192 system_pods.go:116] waiting for k8s-apps to be running ...
I1213 13:06:53.179637 136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1213 13:06:53.196486 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:06:53.196701 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 13:06:53.197178 136192 system_pods.go:86] 17 kube-system pods found
I1213 13:06:53.197216 136192 system_pods.go:89] "amd-gpu-device-plugin-sl2f8" [10079e75-52ad-4ae2-97a6-8c8a76f8cd2e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1213 13:06:53.197232 136192 system_pods.go:89] "coredns-66bc5c9577-277fw" [12d631bf-ed9a-438c-8e6b-7d606f1c5363] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1213 13:06:53.197243 136192 system_pods.go:89] "coredns-66bc5c9577-ztskd" [5b623913-ee74-409f-a7bc-5fda744c8583] Running
I1213 13:06:53.197252 136192 system_pods.go:89] "etcd-addons-685870" [381c1c3c-6e27-4f86-b43d-455f0cd88783] Running
I1213 13:06:53.197258 136192 system_pods.go:89] "kube-apiserver-addons-685870" [16689b99-d74a-4b25-820e-4975dbaa96bc] Running
I1213 13:06:53.197264 136192 system_pods.go:89] "kube-controller-manager-addons-685870" [13f55bbd-6f0f-4c13-a401-bd5d4719d5f6] Running
I1213 13:06:53.197272 136192 system_pods.go:89] "kube-ingress-dns-minikube" [c7214f42-abd1-4b9a-a6a5-431cff38e423] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1213 13:06:53.197277 136192 system_pods.go:89] "kube-proxy-hlmj5" [431fd1a1-aa6e-4095-af35-72087499f30a] Running
I1213 13:06:53.197284 136192 system_pods.go:89] "kube-scheduler-addons-685870" [87faddcd-32df-403f-a0f8-7e9b8370940c] Running
I1213 13:06:53.197298 136192 system_pods.go:89] "metrics-server-85b7d694d7-xqtfb" [2329f277-682f-41d0-9879-ac4768581afd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1213 13:06:53.197310 136192 system_pods.go:89] "nvidia-device-plugin-daemonset-k6r7t" [fabec6f5-3861-4173-b733-8b09a8eeddfa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1213 13:06:53.197322 136192 system_pods.go:89] "registry-6b586f9694-4xd6c" [42f338ba-b090-4f81-ad48-bcb9795e19cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1213 13:06:53.197333 136192 system_pods.go:89] "registry-creds-764b6fb674-lmxzj" [eb2685cd-b67a-4045-a9fd-f3e2480fd2b7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1213 13:06:53.197344 136192 system_pods.go:89] "registry-proxy-ww99f" [b233ab84-669c-4f80-a75e-051ffeafc9b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1213 13:06:53.197352 136192 system_pods.go:89] "snapshot-controller-7d9fbc56b8-68skh" [b3e3630b-02ff-49ab-a56a-554ddddfc5e9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1213 13:06:53.197360 136192 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sxjgh" [6feb5975-075d-4182-acce-5e1e857e5709] Pending
I1213 13:06:53.197368 136192 system_pods.go:89] "storage-provisioner" [bef2cdab-e284-4ee0-b0ab-61d8d1ea5f8e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1213 13:06:53.197379 136192 system_pods.go:126] duration metric: took 31.28651ms to wait for k8s-apps to be running ...
I1213 13:06:53.197394 136192 system_svc.go:44] waiting for kubelet service to be running ....
I1213 13:06:53.197454 136192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1213 13:06:53.663675 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 13:06:53.663686 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:06:54.001530 136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.699488024s)
I1213 13:06:54.001573 136192 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-685870"
I1213 13:06:54.001583 136192 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.204769167s)
I1213 13:06:54.003535 136192 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1213 13:06:54.003553 136192 out.go:179] * Verifying csi-hostpath-driver addon...
I1213 13:06:54.004867 136192 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1213 13:06:54.005571 136192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1213 13:06:54.005828 136192 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1213 13:06:54.005845 136192 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1213 13:06:54.024316 136192 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 13:06:54.024337 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 13:06:54.156149 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:06:54.158441 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 13:06:54.162873 136192 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1213 13:06:54.162898 136192 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1213 13:06:54.222339 136192 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1213 13:06:54.222361 136192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1213 13:06:54.304583 136192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1213 13:06:54.511497 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 13:06:54.647865 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:06:54.649801 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 13:06:55.011110 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 13:06:55.014887 136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.835182604s)
I1213 13:06:55.014919 136192 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.817441745s)
I1213 13:06:55.014947 136192 system_svc.go:56] duration metric: took 1.817548307s WaitForService to wait for kubelet
I1213 13:06:55.014960 136192 kubeadm.go:587] duration metric: took 9.997092185s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1213 13:06:55.015009 136192 node_conditions.go:102] verifying NodePressure condition ...
I1213 13:06:55.020542 136192 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1213 13:06:55.020565 136192 node_conditions.go:123] node cpu capacity is 2
I1213 13:06:55.020583 136192 node_conditions.go:105] duration metric: took 5.563532ms to run NodePressure ...
I1213 13:06:55.020599 136192 start.go:242] waiting for startup goroutines ...
I1213 13:06:55.151301 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 13:06:55.151891 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:06:55.343942 136192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.039307008s)
I1213 13:06:55.344896 136192 addons.go:495] Verifying addon gcp-auth=true in "addons-685870"
I1213 13:06:55.346425 136192 out.go:179] * Verifying gcp-auth addon...
I1213 13:06:55.348331 136192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1213 13:06:55.358692 136192 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1213 13:06:55.358714 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:06:55.537275 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
[... ~260 near-identical kapi.go:96 polling lines elided: the four selectors above (kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=csi-hostpath-driver) are each re-checked roughly every 500ms from 13:06:55 through 13:07:28; every check reports Pending: [<nil>] ...]
I1213 13:07:28.648839 136192 kapi.go:107] duration metric: took 36.004779434s to wait for kubernetes.io/minikube-addons=registry ...
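The wait that just completed can be reproduced by hand. A rough kubectl equivalent of the registry check (the kube-system namespace and the 6m timeout are assumptions — the log records neither — and kapi.go polls for phase Running rather than the ready condition):

kubectl --context addons-685870 -n kube-system wait --for=condition=ready pod -l kubernetes.io/minikube-addons=registry --timeout=6m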
[... ~180 near-identical polling lines elided: ingress-nginx, gcp-auth, and csi-hostpath-driver are re-checked at the same cadence from 13:07:28 through 13:07:58, all still Pending: [<nil>] ...]
I1213 13:07:58.514124 136192 kapi.go:107] duration metric: took 1m4.508546084s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1213 13:07:58.652029 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:07:58.860114 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:07:59.150386 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:07:59.352308 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:07:59.649206 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:07:59.854278 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:00.203179 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:08:00.357204 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:00.650279 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:08:00.854683 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:01.148993 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:08:01.355169 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:01.648419 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:08:01.852261 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:02.147782 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:08:02.352595 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:02.651457 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:08:02.853402 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:03.148572 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:08:03.353393 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:03.648809 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:08:03.855134 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:04.149820 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:08:04.353128 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:04.651601 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:08:04.852434 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:05.148456 136192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 13:08:05.351834 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:05.651340 136192 kapi.go:107] duration metric: took 1m13.00710089s to wait for app.kubernetes.io/name=ingress-nginx ...
I1213 13:08:05.851597 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:06.352640 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:06.852551 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:07.353244 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:07.853800 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:08.354923 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:08.852868 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:09.352601 136192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 13:08:09.852375 136192 kapi.go:107] duration metric: took 1m14.504042081s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1213 13:08:09.854004 136192 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-685870 cluster.
I1213 13:08:09.855062 136192 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1213 13:08:09.856107 136192 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
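[editor's note] The three gcp-auth messages above describe a per-pod opt-out. A minimal sketch of using it (the pod name and image tag here are illustrative, not from this run; the label is assumed to matter at pod creation time, since the credential mount is injected by an admission webhook):

    # hypothetical pod; the gcp-auth webhook skips pods carrying this label
    kubectl --context addons-685870 run no-creds \
      --image=public.ecr.aws/nginx/nginx:latest \
      --labels=gcp-auth-skip-secret=true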
I1213 13:08:09.857205 136192 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, default-storageclass, ingress-dns, storage-provisioner, registry-creds, inspektor-gadget, nvidia-device-plugin, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I1213 13:08:09.858576 136192 addons.go:530] duration metric: took 1m24.840708372s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin default-storageclass ingress-dns storage-provisioner registry-creds inspektor-gadget nvidia-device-plugin storage-provisioner-rancher metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I1213 13:08:09.858623 136192 start.go:247] waiting for cluster config update ...
I1213 13:08:09.858648 136192 start.go:256] writing updated cluster config ...
I1213 13:08:09.858966 136192 ssh_runner.go:195] Run: rm -f paused
I1213 13:08:09.864114 136192 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1213 13:08:09.867207 136192 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ztskd" in "kube-system" namespace to be "Ready" or be gone ...
I1213 13:08:09.872181 136192 pod_ready.go:94] pod "coredns-66bc5c9577-ztskd" is "Ready"
I1213 13:08:09.872212 136192 pod_ready.go:86] duration metric: took 4.984923ms for pod "coredns-66bc5c9577-ztskd" in "kube-system" namespace to be "Ready" or be gone ...
I1213 13:08:09.874516 136192 pod_ready.go:83] waiting for pod "etcd-addons-685870" in "kube-system" namespace to be "Ready" or be gone ...
I1213 13:08:09.877800 136192 pod_ready.go:94] pod "etcd-addons-685870" is "Ready"
I1213 13:08:09.877829 136192 pod_ready.go:86] duration metric: took 3.29268ms for pod "etcd-addons-685870" in "kube-system" namespace to be "Ready" or be gone ...
I1213 13:08:09.879977 136192 pod_ready.go:83] waiting for pod "kube-apiserver-addons-685870" in "kube-system" namespace to be "Ready" or be gone ...
I1213 13:08:09.883832 136192 pod_ready.go:94] pod "kube-apiserver-addons-685870" is "Ready"
I1213 13:08:09.883857 136192 pod_ready.go:86] duration metric: took 3.859ms for pod "kube-apiserver-addons-685870" in "kube-system" namespace to be "Ready" or be gone ...
I1213 13:08:09.885621 136192 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-685870" in "kube-system" namespace to be "Ready" or be gone ...
I1213 13:08:10.267700 136192 pod_ready.go:94] pod "kube-controller-manager-addons-685870" is "Ready"
I1213 13:08:10.267746 136192 pod_ready.go:86] duration metric: took 382.106967ms for pod "kube-controller-manager-addons-685870" in "kube-system" namespace to be "Ready" or be gone ...
I1213 13:08:10.467891 136192 pod_ready.go:83] waiting for pod "kube-proxy-hlmj5" in "kube-system" namespace to be "Ready" or be gone ...
I1213 13:08:10.868424 136192 pod_ready.go:94] pod "kube-proxy-hlmj5" is "Ready"
I1213 13:08:10.868464 136192 pod_ready.go:86] duration metric: took 400.533636ms for pod "kube-proxy-hlmj5" in "kube-system" namespace to be "Ready" or be gone ...
I1213 13:08:11.068552 136192 pod_ready.go:83] waiting for pod "kube-scheduler-addons-685870" in "kube-system" namespace to be "Ready" or be gone ...
I1213 13:08:11.468278 136192 pod_ready.go:94] pod "kube-scheduler-addons-685870" is "Ready"
I1213 13:08:11.468318 136192 pod_ready.go:86] duration metric: took 399.732643ms for pod "kube-scheduler-addons-685870" in "kube-system" namespace to be "Ready" or be gone ...
I1213 13:08:11.468336 136192 pod_ready.go:40] duration metric: took 1.604195099s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
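[editor's note] The pod_ready loop above polls kube-system pods by the six label selectors listed in the log, with a 4m0s budget. An equivalent hand-run check for one of those selectors (a sketch; kubectl wait takes one selector per invocation, so repeat for component=etcd, component=kube-apiserver, and the rest):

    kubectl --context addons-685870 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s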
I1213 13:08:11.517399 136192 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
I1213 13:08:11.519022 136192 out.go:179] * Done! kubectl is now configured to use "addons-685870" cluster and "default" namespace by default
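[editor's note] A quick way to confirm the final message (assuming minikube named the kubeconfig context after the profile, as the log implies):

    kubectl config current-context   # expected output: addons-685870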
==> CRI-O <==
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.110510307Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe479ff2-c705-4dea-bb47-81709990b04e name=/runtime.v1.RuntimeService/ListContainers
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.110933798Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe479ff2-c705-4dea-bb47-81709990b04e name=/runtime.v1.RuntimeService/ListContainers
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.111302610Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6753fd85fcd829c813398ffdda13c27e0224e5ae8b0c212ea8c363b3d2555de8,PodSandboxId:0a7c55d23ca61b7aa4a13730e1600d23288bb5f6c21f98b36fb9f5efba23c869,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765631324333940208,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9264c705-e985-4103-9edc-eaa92549670d,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12ed54bc37e8a810b2bcf11d2d5520536632e7ceef8b58e4e00ed0e45a1d793,PodSandboxId:a6a14dca33627c6c3a76de9eef91be13cef2750d4410e39091bf3c11dea67042,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765631296208329080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28996d9e-2b5f-4e3c-b142-b2a3308dd12c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1cdcb8f278f0b0ac210e671a5109dbe3dbf17cb852b675fca79a8e4d650ea7,PodSandboxId:c613d524d2eda396777a0be2c90ef5cec072a50f9c114d5f737863b9d4f0c230,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765631284681922883,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-wnvpr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ba3cb1ed-ba8f-4a57-9ebc-48e9b1ca789c,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ecee2b5c3135febd56e2581fe7ce863b76c7becb1b2d6149bf37854b7b6c86a9,PodSandboxId:7e1023281f49143b11f685483f4f2992aa0ab69e0fc2d5a80c6695b5087209cf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765631263481446227,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-df6ws,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e937ef53-6752-4fcf-b14d-7fc9c61e2822,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f69be2a0c8e8058841bcd906baff248a5298620bd9ad010dc18e106e3efb,PodSandboxId:8b98c26ee365e368565e4e9000251a027fdd0dad66543d8b5f56c8b994794ff1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765631263354804192,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fj4wb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d0daeae6-7a6e-4329-b7af-d85916aa1733,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c79972183e309ffabfb8539ffd6369672b8fe80156f4b37324c71183efe2377,PodSandboxId:084b3e2e81bd52415040bf457b5d939ecf54f9a31d55b29510a6a96ee98e7187,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765631237512834278,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7214f42-abd1-4b9a-a6a5-431cff38e423,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd3cedf932f2846fc9e7438c16ed4535e6a732725e690627fd53090376f70a1,PodSandboxId:2c84a2744aa6caedca2a609b6fb8b82940155c69a4493023410cb07c9c55d58e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765631221292006391,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sl2f8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10079e75-52ad-4ae2-97a6-8c8a76f8cd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e857c13b18225a46dc09a8a369409eba68c7bc0f71370e487890c74c2f44da,PodSandboxId:86d0f1c9640695acd2f02a327a38911e1a52146632e001a91dda0c909a04784f,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765631212020303563,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bef2cdab-e284-4ee0-b0ab-61d8d1ea5f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beae040dada503d5a4152c855ee36f7436586cc0774ea652facc88eeabe45737,PodSandboxId:522a168346f8b9daebb875e37a2312f79a93b92c1d1ed2e8eda4b3bff11a258f,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765631205923967488,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ztskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b623913-ee74-409f-a7bc-5fda744c8583,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:630eea4f4f055f1d0825770d1d580310b0b51f058e1711e54a62394836b247dd,PodSandboxId:faabe943f3b6cbc80c3fa80de4eed0665e1b012378e8819d103befad959ba547,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765631204355252376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hlmj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431fd1a1-aa6e-4095-af35-72087499f30a,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0421f431c6c33d3c43ff62376128999b83037865e0519d591d4ba2d20f130697,PodSandboxId:588ae9df297adf54f1470c4ccab2809b0280469689bbcb95fa0b580a116340be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765631192805207604,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ae31683d0b65ea196103472695e50ec,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:309ba569e8bc0584154808bbc0a9005c3d74c04dd4028e42972ad622398ee1a0,PodSandboxId:f776546868eaf2ee8a108614fa2cade081056bc694d87b89fb3ab75090c698d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765631192824698744,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f40f59c68bc1ab457c8ac3efb96ad62a,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70281e43646fa99dc87975cf5eea957a37772d62d01261defa66be92bdfb79a1,PodSandboxId:49619928c3b2b45beae713de444e4195b9a2b72d26de5b60ece9ac629088dab2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765631192787344560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d07133aa7ef6081dfb5c
33d1096c7d7,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f159075538cb5e1ad4bda439214bba87babd48e4b08ada0c68a49789f835cd6,PodSandboxId:5a3edbbcfca145fdfcf8c1ed964191cfaeb4e723ff86e644b452724a8b51b386,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765631192776222969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b4ccda251d6114c3d01a6ec894549d,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe479ff2-c705-4dea-bb47-81709990b04e name=/runtime.v1.RuntimeService/ListContainers
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.136668689Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.160014970Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8780b81a-5135-40c9-bf6f-9de1c3e79542 name=/runtime.v1.RuntimeService/Version
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.160215589Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8780b81a-5135-40c9-bf6f-9de1c3e79542 name=/runtime.v1.RuntimeService/Version
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.161896867Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb72d533-c0c1-4f26-973c-b7c533873bee name=/runtime.v1.ImageService/ImageFsInfo
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.163339853Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765631466163304111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb72d533-c0c1-4f26-973c-b7c533873bee name=/runtime.v1.ImageService/ImageFsInfo
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.164343841Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=20b28bab-1ba4-4c96-84fe-0fb9cc8186d1 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.164423141Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=20b28bab-1ba4-4c96-84fe-0fb9cc8186d1 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.164841568Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6753fd85fcd829c813398ffdda13c27e0224e5ae8b0c212ea8c363b3d2555de8,PodSandboxId:0a7c55d23ca61b7aa4a13730e1600d23288bb5f6c21f98b36fb9f5efba23c869,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765631324333940208,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9264c705-e985-4103-9edc-eaa92549670d,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12ed54bc37e8a810b2bcf11d2d5520536632e7ceef8b58e4e00ed0e45a1d793,PodSandboxId:a6a14dca33627c6c3a76de9eef91be13cef2750d4410e39091bf3c11dea67042,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765631296208329080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28996d9e-2b5f-4e3c-b142-b2a3308dd12c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1cdcb8f278f0b0ac210e671a5109dbe3dbf17cb852b675fca79a8e4d650ea7,PodSandboxId:c613d524d2eda396777a0be2c90ef5cec072a50f9c114d5f737863b9d4f0c230,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765631284681922883,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-wnvpr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ba3cb1ed-ba8f-4a57-9ebc-48e9b1ca789c,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ecee2b5c3135febd56e2581fe7ce863b76c7becb1b2d6149bf37854b7b6c86a9,PodSandboxId:7e1023281f49143b11f685483f4f2992aa0ab69e0fc2d5a80c6695b5087209cf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765631263481446227,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-df6ws,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e937ef53-6752-4fcf-b14d-7fc9c61e2822,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f69be2a0c8e8058841bcd906baff248a5298620bd9ad010dc18e106e3efb,PodSandboxId:8b98c26ee365e368565e4e9000251a027fdd0dad66543d8b5f56c8b994794ff1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765631263354804192,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fj4wb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d0daeae6-7a6e-4329-b7af-d85916aa1733,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c79972183e309ffabfb8539ffd6369672b8fe80156f4b37324c71183efe2377,PodSandboxId:084b3e2e81bd52415040bf457b5d939ecf54f9a31d55b29510a6a96ee98e7187,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765631237512834278,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7214f42-abd1-4b9a-a6a5-431cff38e423,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd3cedf932f2846fc9e7438c16ed4535e6a732725e690627fd53090376f70a1,PodSandboxId:2c84a2744aa6caedca2a609b6fb8b82940155c69a4493023410cb07c9c55d58e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765631221292006391,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sl2f8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10079e75-52ad-4ae2-97a6-8c8a76f8cd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e857c13b18225a46dc09a8a369409eba68c7bc0f71370e487890c74c2f44da,PodSandboxId:86d0f1c9640695acd2f02a327a38911e1a52146632e001a91dda0c909a04784f,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765631212020303563,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bef2cdab-e284-4ee0-b0ab-61d8d1ea5f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beae040dada503d5a4152c855ee36f7436586cc0774ea652facc88eeabe45737,PodSandboxId:522a168346f8b9daebb875e37a2312f79a93b92c1d1ed2e8eda4b3bff11a258f,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765631205923967488,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ztskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b623913-ee74-409f-a7bc-5fda744c8583,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:630eea4f4f055f1d0825770d1d580310b0b51f058e1711e54a62394836b247dd,PodSandboxId:faabe943f3b6cbc80c3fa80de4eed0665e1b012378e8819d103befad959ba547,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765631204355252376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hlmj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431fd1a1-aa6e-4095-af35-72087499f30a,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0421f431c6c33d3c43ff62376128999b83037865e0519d591d4ba2d20f130697,PodSandboxId:588ae9df297adf54f1470c4ccab2809b0280469689bbcb95fa0b580a116340be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765631192805207604,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ae31683d0b65ea196103472695e50ec,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:309ba569e8bc0584154808bbc0a9005c3d74c04dd4028e42972ad622398ee1a0,PodSandboxId:f776546868eaf2ee8a108614fa2cade081056bc694d87b89fb3ab75090c698d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765631192824698744,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f40f59c68bc1ab457c8ac3efb96ad62a,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70281e43646fa99dc87975cf5eea957a37772d62d01261defa66be92bdfb79a1,PodSandboxId:49619928c3b2b45beae713de444e4195b9a2b72d26de5b60ece9ac629088dab2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765631192787344560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d07133aa7ef6081dfb5c
33d1096c7d7,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f159075538cb5e1ad4bda439214bba87babd48e4b08ada0c68a49789f835cd6,PodSandboxId:5a3edbbcfca145fdfcf8c1ed964191cfaeb4e723ff86e644b452724a8b51b386,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765631192776222969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b4ccda251d6114c3d01a6ec894549d,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=20b28bab-1ba4-4c96-84fe-0fb9cc8186d1 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.198542296Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff511621-2318-4645-b6b9-9f2086acb6f0 name=/runtime.v1.RuntimeService/Version
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.198909538Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff511621-2318-4645-b6b9-9f2086acb6f0 name=/runtime.v1.RuntimeService/Version
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.200880419Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=521dd6f3-8790-458b-91b1-e728bde74085 name=/runtime.v1.ImageService/ImageFsInfo
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.202618565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765631466202584553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=521dd6f3-8790-458b-91b1-e728bde74085 name=/runtime.v1.ImageService/ImageFsInfo
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.203695814Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77dae93d-abb9-4d2b-9acf-35d24bde4375 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.203985440Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77dae93d-abb9-4d2b-9acf-35d24bde4375 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.204322782Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6753fd85fcd829c813398ffdda13c27e0224e5ae8b0c212ea8c363b3d2555de8,PodSandboxId:0a7c55d23ca61b7aa4a13730e1600d23288bb5f6c21f98b36fb9f5efba23c869,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765631324333940208,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9264c705-e985-4103-9edc-eaa92549670d,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12ed54bc37e8a810b2bcf11d2d5520536632e7ceef8b58e4e00ed0e45a1d793,PodSandboxId:a6a14dca33627c6c3a76de9eef91be13cef2750d4410e39091bf3c11dea67042,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765631296208329080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28996d9e-2b5f-4e3c-b142-b2a3308dd12c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1cdcb8f278f0b0ac210e671a5109dbe3dbf17cb852b675fca79a8e4d650ea7,PodSandboxId:c613d524d2eda396777a0be2c90ef5cec072a50f9c114d5f737863b9d4f0c230,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765631284681922883,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-wnvpr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ba3cb1ed-ba8f-4a57-9ebc-48e9b1ca789c,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ecee2b5c3135febd56e2581fe7ce863b76c7becb1b2d6149bf37854b7b6c86a9,PodSandboxId:7e1023281f49143b11f685483f4f2992aa0ab69e0fc2d5a80c6695b5087209cf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765631263481446227,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-df6ws,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e937ef53-6752-4fcf-b14d-7fc9c61e2822,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f69be2a0c8e8058841bcd906baff248a5298620bd9ad010dc18e106e3efb,PodSandboxId:8b98c26ee365e368565e4e9000251a027fdd0dad66543d8b5f56c8b994794ff1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765631263354804192,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fj4wb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d0daeae6-7a6e-4329-b7af-d85916aa1733,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c79972183e309ffabfb8539ffd6369672b8fe80156f4b37324c71183efe2377,PodSandboxId:084b3e2e81bd52415040bf457b5d939ecf54f9a31d55b29510a6a96ee98e7187,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765631237512834278,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7214f42-abd1-4b9a-a6a5-431cff38e423,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd3cedf932f2846fc9e7438c16ed4535e6a732725e690627fd53090376f70a1,PodSandboxId:2c84a2744aa6caedca2a609b6fb8b82940155c69a4493023410cb07c9c55d58e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.236648335Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e4e47dd-69e9-4ccc-86af-e8101f7198a3 name=/runtime.v1.RuntimeService/Version
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.236985438Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e4e47dd-69e9-4ccc-86af-e8101f7198a3 name=/runtime.v1.RuntimeService/Version
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.238385639Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a224084f-95f1-4e5e-aaeb-989f8d94abdb name=/runtime.v1.ImageService/ImageFsInfo
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.240636031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765631466240600651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a224084f-95f1-4e5e-aaeb-989f8d94abdb name=/runtime.v1.ImageService/ImageFsInfo
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.241943198Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f91276b-b1ae-4e59-8b4e-0a4fcde38dbb name=/runtime.v1.RuntimeService/ListContainers
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.242227734Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f91276b-b1ae-4e59-8b4e-0a4fcde38dbb name=/runtime.v1.RuntimeService/ListContainers
Dec 13 13:11:06 addons-685870 crio[813]: time="2025-12-13 13:11:06.242840561Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6753fd85fcd829c813398ffdda13c27e0224e5ae8b0c212ea8c363b3d2555de8,PodSandboxId:0a7c55d23ca61b7aa4a13730e1600d23288bb5f6c21f98b36fb9f5efba23c869,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765631324333940208,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9264c705-e985-4103-9edc-eaa92549670d,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12ed54bc37e8a810b2bcf11d2d5520536632e7ceef8b58e4e00ed0e45a1d793,PodSandboxId:a6a14dca33627c6c3a76de9eef91be13cef2750d4410e39091bf3c11dea67042,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765631296208329080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28996d9e-2b5f-4e3c-b142-b2a3308dd12c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1cdcb8f278f0b0ac210e671a5109dbe3dbf17cb852b675fca79a8e4d650ea7,PodSandboxId:c613d524d2eda396777a0be2c90ef5cec072a50f9c114d5f737863b9d4f0c230,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765631284681922883,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-wnvpr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ba3cb1ed-ba8f-4a57-9ebc-48e9b1ca789c,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ecee2b5c3135febd56e2581fe7ce863b76c7becb1b2d6149bf37854b7b6c86a9,PodSandboxId:7e1023281f49143b11f685483f4f2992aa0ab69e0fc2d5a80c6695b5087209cf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765631263481446227,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-df6ws,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e937ef53-6752-4fcf-b14d-7fc9c61e2822,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a27f69be2a0c8e8058841bcd906baff248a5298620bd9ad010dc18e106e3efb,PodSandboxId:8b98c26ee365e368565e4e9000251a027fdd0dad66543d8b5f56c8b994794ff1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765631263354804192,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fj4wb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d0daeae6-7a6e-4329-b7af-d85916aa1733,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c79972183e309ffabfb8539ffd6369672b8fe80156f4b37324c71183efe2377,PodSandboxId:084b3e2e81bd52415040bf457b5d939ecf54f9a31d55b29510a6a96ee98e7187,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765631237512834278,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7214f42-abd1-4b9a-a6a5-431cff38e423,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd3cedf932f2846fc9e7438c16ed4535e6a732725e690627fd53090376f70a1,PodSandboxId:2c84a2744aa6caedca2a609b6fb8b82940155c69a4493023410cb07c9c55d58e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765631221292006391,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sl2f8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10079e75-52ad-4ae2-97a6-8c8a76f8cd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e857c13b18225a46dc09a8a369409eba68c7bc0f71370e487890c74c2f44da,PodSandboxId:86d0f1c9640695acd2f02a327a38911e1a52146632e001a91dda0c909a04784f,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765631212020303563,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bef2cdab-e284-4ee0-b0ab-61d8d1ea5f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beae040dada503d5a4152c855ee36f7436586cc0774ea652facc88eeabe45737,PodSandboxId:522a168346f8b9daebb875e37a2312f79a93b92c1d1ed2e8eda4b3bff11a258f,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765631205923967488,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ztskd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b623913-ee74-409f-a7bc-5fda744c8583,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:630eea4f4f055f1d0825770d1d580310b0b51f058e1711e54a62394836b247dd,PodSandboxId:faabe943f3b6cbc80c3fa80de4eed0665e1b012378e8819d103befad959ba547,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765631204355252376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hlmj5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431fd1a1-aa6e-4095-af35-72087499f30a,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0421f431c6c33d3c43ff62376128999b83037865e0519d591d4ba2d20f130697,PodSandboxId:588ae9df297adf54f1470c4ccab2809b0280469689bbcb95fa0b580a116340be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765631192805207604,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ae31683d0b65ea196103472695e50ec,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:309ba569e8bc0584154808bbc0a9005c3d74c04dd4028e42972ad622398ee1a0,PodSandboxId:f776546868eaf2ee8a108614fa2cade081056bc694d87b89fb3ab75090c698d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765631192824698744,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f40f59c68bc1ab457c8ac3efb96ad62a,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70281e43646fa99dc87975cf5eea957a37772d62d01261defa66be92bdfb79a1,PodSandboxId:49619928c3b2b45beae713de444e4195b9a2b72d26de5b60ece9ac629088dab2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765631192787344560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d07133aa7ef6081dfb5c
33d1096c7d7,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f159075538cb5e1ad4bda439214bba87babd48e4b08ada0c68a49789f835cd6,PodSandboxId:5a3edbbcfca145fdfcf8c1ed964191cfaeb4e723ff86e644b452724a8b51b386,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765631192776222969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-685870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77b4ccda251d6114c3d01a6ec894549d,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6f91276b-b1ae-4e59-8b4e-0a4fcde38dbb name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
6753fd85fcd82 public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff 2 minutes ago Running nginx 0 0a7c55d23ca61 nginx default
e12ed54bc37e8 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 2 minutes ago Running busybox 0 a6a14dca33627 busybox default
eb1cdcb8f278f registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad 3 minutes ago Running controller 0 c613d524d2eda ingress-nginx-controller-85d4c799dd-wnvpr ingress-nginx
ecee2b5c3135f registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285 3 minutes ago Exited patch 0 7e1023281f491 ingress-nginx-admission-patch-df6ws ingress-nginx
2a27f69be2a0c registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285 3 minutes ago Exited create 0 8b98c26ee365e ingress-nginx-admission-create-fj4wb ingress-nginx
4c79972183e30 docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 3 minutes ago Running minikube-ingress-dns 0 084b3e2e81bd5 kube-ingress-dns-minikube kube-system
0bd3cedf932f2 docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 2c84a2744aa6c amd-gpu-device-plugin-sl2f8 kube-system
24e857c13b182 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 86d0f1c964069 storage-provisioner kube-system
beae040dada50 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 522a168346f8b coredns-66bc5c9577-ztskd kube-system
630eea4f4f055 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45 4 minutes ago Running kube-proxy 0 faabe943f3b6c kube-proxy-hlmj5 kube-system
309ba569e8bc0 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952 4 minutes ago Running kube-scheduler 0 f776546868eaf kube-scheduler-addons-685870 kube-system
0421f431c6c33 a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1 4 minutes ago Running etcd 0 588ae9df297ad etcd-addons-685870 kube-system
70281e43646fa 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8 4 minutes ago Running kube-controller-manager 0 49619928c3b2b kube-controller-manager-addons-685870 kube-system
1f159075538cb a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85 4 minutes ago Running kube-apiserver 0 5a3edbbcfca14 kube-apiserver-addons-685870 kube-system
==> coredns [beae040dada503d5a4152c855ee36f7436586cc0774ea652facc88eeabe45737] <==
[INFO] 10.244.0.8:59086 - 63200 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000294388s
[INFO] 10.244.0.8:59086 - 64765 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00006935s
[INFO] 10.244.0.8:59086 - 60941 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000065538s
[INFO] 10.244.0.8:59086 - 14505 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00008602s
[INFO] 10.244.0.8:59086 - 24433 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000055498s
[INFO] 10.244.0.8:59086 - 7702 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000082981s
[INFO] 10.244.0.8:59086 - 54077 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000077065s
[INFO] 10.244.0.8:54771 - 7096 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000129029s
[INFO] 10.244.0.8:54771 - 7408 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000111744s
[INFO] 10.244.0.8:57211 - 41080 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081259s
[INFO] 10.244.0.8:57211 - 40819 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108437s
[INFO] 10.244.0.8:58564 - 5912 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000086482s
[INFO] 10.244.0.8:58564 - 6175 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000091929s
[INFO] 10.244.0.8:57556 - 17781 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000095932s
[INFO] 10.244.0.8:57556 - 18005 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000101347s
[INFO] 10.244.0.23:40332 - 2710 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000586979s
[INFO] 10.244.0.23:55247 - 29803 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000188394s
[INFO] 10.244.0.23:60868 - 52799 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000116926s
[INFO] 10.244.0.23:52338 - 64823 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000184193s
[INFO] 10.244.0.23:53118 - 54286 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000089319s
[INFO] 10.244.0.23:60500 - 58517 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00012023s
[INFO] 10.244.0.23:57103 - 50773 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001072713s
[INFO] 10.244.0.23:54850 - 34747 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.00376797s
[INFO] 10.244.0.28:59699 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000269278s
[INFO] 10.244.0.28:58636 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000258104s
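The NXDOMAIN runs above are the pod's stub resolver expanding its search path: with the default ndots:5, even a multi-label name like registry.kube-system.svc.cluster.local gets each search suffix appended and rejected before the name is tried as-is. Passing a fully qualified name with a trailing dot skips that expansion. A minimal Go sketch of the difference, assuming it runs inside a pod on this cluster (the service name is taken from the log lines above):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	// The trailing dot marks the name as fully qualified, so the stub
	// resolver sends exactly one query instead of walking the search
	// path (each suffix produced an NXDOMAIN in the coredns log above).
	addrs, err := net.DefaultResolver.LookupHost(ctx, "registry.kube-system.svc.cluster.local.")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}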
==> describe nodes <==
Name: addons-685870
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-685870
kubernetes.io/os=linux
minikube.k8s.io/commit=142a8bd7cb3f031b5f72a3965bb211dc77d9e1a7
minikube.k8s.io/name=addons-685870
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_13T13_06_39_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-685870
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 13 Dec 2025 13:06:35 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-685870
AcquireTime: <unset>
RenewTime: Sat, 13 Dec 2025 13:11:03 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 13 Dec 2025 13:09:12 +0000 Sat, 13 Dec 2025 13:06:33 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 13 Dec 2025 13:09:12 +0000 Sat, 13 Dec 2025 13:06:33 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 13 Dec 2025 13:09:12 +0000 Sat, 13 Dec 2025 13:06:33 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 13 Dec 2025 13:09:12 +0000 Sat, 13 Dec 2025 13:06:40 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.155
Hostname: addons-685870
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
System Info:
Machine ID: 2316754160b94d48b988554cdedf00bd
System UUID: 23167541-60b9-4d48-b988-554cdedf00bd
Boot ID: 9ac98f12-9267-4c27-875b-a5744a9fc8da
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.2
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m54s
default hello-world-app-5d498dc89-mvsgp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 1s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m27s
ingress-nginx ingress-nginx-controller-85d4c799dd-wnvpr 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m14s
kube-system amd-gpu-device-plugin-sl2f8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m19s
kube-system coredns-66bc5c9577-ztskd 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m22s
kube-system etcd-addons-685870 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m27s
kube-system kube-apiserver-addons-685870 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m27s
kube-system kube-controller-manager-addons-685870 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m27s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m17s
kube-system kube-proxy-hlmj5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m23s
kube-system kube-scheduler-addons-685870 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m27s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m17s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m21s kube-proxy
Normal Starting 4m27s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m27s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m27s kubelet Node addons-685870 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m27s kubelet Node addons-685870 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m27s kubelet Node addons-685870 status is now: NodeHasSufficientPID
Normal NodeReady 4m26s kubelet Node addons-685870 status is now: NodeReady
Normal RegisteredNode 4m23s node-controller Node addons-685870 event: Registered Node addons-685870 in Controller
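The condition table in the describe output above can also be pulled programmatically when scripting a post-mortem. A hedged client-go sketch, assuming a kubeconfig at the default location and this profile's node name addons-685870:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config points at the minikube profile; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-685870", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Prints the same MemoryPressure/DiskPressure/PIDPressure/Ready set
	// shown in the describe output above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}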
==> dmesg <==
[ +0.766202] kauditd_printk_skb: 332 callbacks suppressed
[ +0.364400] kauditd_printk_skb: 409 callbacks suppressed
[Dec13 13:07] kauditd_printk_skb: 268 callbacks suppressed
[ +6.808503] kauditd_printk_skb: 5 callbacks suppressed
[ +9.411617] kauditd_printk_skb: 11 callbacks suppressed
[ +5.965243] kauditd_printk_skb: 26 callbacks suppressed
[ +10.357546] kauditd_printk_skb: 32 callbacks suppressed
[ +8.122341] kauditd_printk_skb: 26 callbacks suppressed
[ +2.193663] kauditd_printk_skb: 192 callbacks suppressed
[ +5.044751] kauditd_printk_skb: 80 callbacks suppressed
[ +0.681574] kauditd_printk_skb: 99 callbacks suppressed
[Dec13 13:08] kauditd_printk_skb: 56 callbacks suppressed
[ +0.000069] kauditd_printk_skb: 38 callbacks suppressed
[ +5.183224] kauditd_printk_skb: 47 callbacks suppressed
[ +5.885828] kauditd_printk_skb: 22 callbacks suppressed
[ +2.952614] kauditd_printk_skb: 74 callbacks suppressed
[ +1.254581] kauditd_printk_skb: 124 callbacks suppressed
[ +0.743337] kauditd_printk_skb: 70 callbacks suppressed
[ +3.017748] kauditd_printk_skb: 103 callbacks suppressed
[Dec13 13:09] kauditd_printk_skb: 86 callbacks suppressed
[ +0.709359] kauditd_printk_skb: 145 callbacks suppressed
[ +0.695662] kauditd_printk_skb: 80 callbacks suppressed
[ +7.670594] kauditd_printk_skb: 71 callbacks suppressed
[ +10.663587] kauditd_printk_skb: 42 callbacks suppressed
[Dec13 13:11] kauditd_printk_skb: 10 callbacks suppressed
==> etcd [0421f431c6c33d3c43ff62376128999b83037865e0519d591d4ba2d20f130697] <==
{"level":"warn","ts":"2025-12-13T13:07:48.372050Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"226.831867ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-13T13:07:48.372069Z","caller":"traceutil/trace.go:172","msg":"trace[1817313856] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1050; }","duration":"226.854414ms","start":"2025-12-13T13:07:48.145209Z","end":"2025-12-13T13:07:48.372063Z","steps":["trace[1817313856] 'range keys from in-memory index tree' (duration: 226.78691ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-13T13:07:48.372080Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.120916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
{"level":"info","ts":"2025-12-13T13:07:48.372112Z","caller":"traceutil/trace.go:172","msg":"trace[1432393094] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1050; }","duration":"162.159609ms","start":"2025-12-13T13:07:48.209945Z","end":"2025-12-13T13:07:48.372105Z","steps":["trace[1432393094] 'range keys from in-memory index tree' (duration: 162.021402ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-13T13:07:48.372147Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"224.443519ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-13T13:07:48.372160Z","caller":"traceutil/trace.go:172","msg":"trace[423326700] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1050; }","duration":"224.456286ms","start":"2025-12-13T13:07:48.147700Z","end":"2025-12-13T13:07:48.372156Z","steps":["trace[423326700] 'range keys from in-memory index tree' (duration: 224.395137ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-13T13:07:48.372225Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"222.402894ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-13T13:07:48.372236Z","caller":"traceutil/trace.go:172","msg":"trace[760549697] range","detail":"{range_begin:/registry/prioritylevelconfigurations; range_end:; response_count:0; response_revision:1050; }","duration":"222.414706ms","start":"2025-12-13T13:07:48.149818Z","end":"2025-12-13T13:07:48.372233Z","steps":["trace[760549697] 'range keys from in-memory index tree' (duration: 222.360662ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T13:08:00.188639Z","caller":"traceutil/trace.go:172","msg":"trace[968734986] linearizableReadLoop","detail":"{readStateIndex:1152; appliedIndex:1152; }","duration":"120.891062ms","start":"2025-12-13T13:08:00.067730Z","end":"2025-12-13T13:08:00.188621Z","steps":["trace[968734986] 'read index received' (duration: 120.884304ms)","trace[968734986] 'applied index is now lower than readState.Index' (duration: 5.889µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-13T13:08:00.188759Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.029428ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csistoragecapacities\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-13T13:08:00.188779Z","caller":"traceutil/trace.go:172","msg":"trace[1659867722] range","detail":"{range_begin:/registry/csistoragecapacities; range_end:; response_count:0; response_revision:1119; }","duration":"121.062945ms","start":"2025-12-13T13:08:00.067710Z","end":"2025-12-13T13:08:00.188773Z","steps":["trace[1659867722] 'agreement among raft nodes before linearized reading' (duration: 120.991989ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T13:08:00.190351Z","caller":"traceutil/trace.go:172","msg":"trace[1619384361] transaction","detail":"{read_only:false; response_revision:1120; number_of_response:1; }","duration":"184.853714ms","start":"2025-12-13T13:08:00.005487Z","end":"2025-12-13T13:08:00.190341Z","steps":["trace[1619384361] 'process raft request' (duration: 184.502586ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T13:08:08.707407Z","caller":"traceutil/trace.go:172","msg":"trace[2146368593] transaction","detail":"{read_only:false; response_revision:1147; number_of_response:1; }","duration":"159.312268ms","start":"2025-12-13T13:08:08.548082Z","end":"2025-12-13T13:08:08.707395Z","steps":["trace[2146368593] 'process raft request' (duration: 159.230951ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T13:08:36.417586Z","caller":"traceutil/trace.go:172","msg":"trace[1160569190] linearizableReadLoop","detail":"{readStateIndex:1365; appliedIndex:1365; }","duration":"277.928298ms","start":"2025-12-13T13:08:36.139641Z","end":"2025-12-13T13:08:36.417570Z","steps":["trace[1160569190] 'read index received' (duration: 277.883654ms)","trace[1160569190] 'applied index is now lower than readState.Index' (duration: 38.452µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-13T13:08:36.417686Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"278.030099ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-13T13:08:36.417704Z","caller":"traceutil/trace.go:172","msg":"trace[1349741951] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1325; }","duration":"278.062617ms","start":"2025-12-13T13:08:36.139637Z","end":"2025-12-13T13:08:36.417699Z","steps":["trace[1349741951] 'agreement among raft nodes before linearized reading' (duration: 278.002858ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T13:08:36.417949Z","caller":"traceutil/trace.go:172","msg":"trace[1673974490] transaction","detail":"{read_only:false; response_revision:1326; number_of_response:1; }","duration":"284.552335ms","start":"2025-12-13T13:08:36.133390Z","end":"2025-12-13T13:08:36.417942Z","steps":["trace[1673974490] 'process raft request' (duration: 284.477349ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T13:08:38.301429Z","caller":"traceutil/trace.go:172","msg":"trace[723761713] linearizableReadLoop","detail":"{readStateIndex:1368; appliedIndex:1368; }","duration":"213.250557ms","start":"2025-12-13T13:08:38.088160Z","end":"2025-12-13T13:08:38.301410Z","steps":["trace[723761713] 'read index received' (duration: 213.245186ms)","trace[723761713] 'applied index is now lower than readState.Index' (duration: 4.466µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-13T13:08:38.301598Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"213.423968ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-13T13:08:38.301619Z","caller":"traceutil/trace.go:172","msg":"trace[1651104628] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1328; }","duration":"213.464316ms","start":"2025-12-13T13:08:38.088149Z","end":"2025-12-13T13:08:38.301613Z","steps":["trace[1651104628] 'agreement among raft nodes before linearized reading' (duration: 213.34678ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-13T13:08:38.301786Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.175376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-13T13:08:38.301801Z","caller":"traceutil/trace.go:172","msg":"trace[1907552154] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1328; }","duration":"162.19275ms","start":"2025-12-13T13:08:38.139604Z","end":"2025-12-13T13:08:38.301796Z","steps":["trace[1907552154] 'agreement among raft nodes before linearized reading' (duration: 162.16347ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-13T13:08:38.301932Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.230706ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
{"level":"info","ts":"2025-12-13T13:08:38.301957Z","caller":"traceutil/trace.go:172","msg":"trace[143458453] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1328; }","duration":"126.254342ms","start":"2025-12-13T13:08:38.175697Z","end":"2025-12-13T13:08:38.301951Z","steps":["trace[143458453] 'agreement among raft nodes before linearized reading' (duration: 126.185318ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T13:08:48.935623Z","caller":"traceutil/trace.go:172","msg":"trace[881859974] transaction","detail":"{read_only:false; response_revision:1429; number_of_response:1; }","duration":"141.627573ms","start":"2025-12-13T13:08:48.793982Z","end":"2025-12-13T13:08:48.935609Z","steps":["trace[881859974] 'process raft request' (duration: 141.493027ms)"],"step_count":1}
==> kernel <==
13:11:06 up 4 min, 0 users, load average: 0.54, 0.74, 0.37
Linux addons-685870 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Dec 13 11:18:23 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [1f159075538cb5e1ad4bda439214bba87babd48e4b08ada0c68a49789f835cd6] <==
W1213 13:07:25.631892 1 handler_proxy.go:99] no RequestInfo found in the context
E1213 13:07:25.631969 1 controller.go:146] "Unhandled Error" err=<
Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I1213 13:07:25.653143 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1213 13:08:23.278200 1 conn.go:339] Error on socket receive: read tcp 192.168.39.155:8443->192.168.39.1:40120: use of closed network connection
I1213 13:08:32.521353 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.217.207"}
I1213 13:08:39.077453 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1213 13:08:39.269032 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.110.21"}
I1213 13:08:49.166457 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1213 13:09:11.203697 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1213 13:09:11.203826 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1213 13:09:11.237504 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1213 13:09:11.237590 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1213 13:09:11.252410 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1213 13:09:11.252460 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1213 13:09:11.273991 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1213 13:09:11.274020 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1213 13:09:12.237607 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1213 13:09:12.274558 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1213 13:09:12.390364 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
E1213 13:09:19.886222 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I1213 13:09:26.656036 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1213 13:11:05.126520 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.96.39"}
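The 503 for v1beta1.metrics.k8s.io above is the aggregation layer failing to reach metrics-server while it was still coming up; the later "Nothing (removed from the queue)" entry suggests it recovered. One way to inspect an APIService's Available condition without kubectl is via the dynamic client; a sketch under the same default-kubeconfig assumption as the earlier example:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	gvr := schema.GroupVersionResource{
		Group:    "apiregistration.k8s.io",
		Version:  "v1",
		Resource: "apiservices",
	}
	obj, err := dyn.Resource(gvr).Get(context.Background(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The condition of type Available mirrors the 503s logged above while
	// metrics-server was still starting.
	conds, _, _ := unstructured.NestedSlice(obj.Object, "status", "conditions")
	fmt.Println(conds)
}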
==> kube-controller-manager [70281e43646fa99dc87975cf5eea957a37772d62d01261defa66be92bdfb79a1] <==
I1213 13:09:18.667017 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
E1213 13:09:20.514817 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 13:09:20.516058 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 13:09:21.710074 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 13:09:21.711238 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 13:09:29.940159 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 13:09:29.941136 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 13:09:32.812881 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 13:09:32.813834 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 13:09:33.003359 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 13:09:33.004356 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 13:09:46.436900 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 13:09:46.437857 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 13:09:47.010669 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 13:09:47.011863 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 13:09:54.325788 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 13:09:54.327005 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 13:10:23.235285 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 13:10:23.236335 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 13:10:23.607193 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 13:10:23.608415 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 13:10:34.898142 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 13:10:34.899034 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 13:11:06.546423 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 13:11:06.549020 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [630eea4f4f055f1d0825770d1d580310b0b51f058e1711e54a62394836b247dd] <==
I1213 13:06:44.587340 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1213 13:06:44.687487 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1213 13:06:44.687630 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.155"]
E1213 13:06:44.687694 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1213 13:06:44.734136 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1213 13:06:44.734261 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1213 13:06:44.734304 1 server_linux.go:132] "Using iptables Proxier"
I1213 13:06:44.746384 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1213 13:06:44.746639 1 server.go:527] "Version info" version="v1.34.2"
I1213 13:06:44.746679 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1213 13:06:44.751482 1 config.go:200] "Starting service config controller"
I1213 13:06:44.751804 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1213 13:06:44.751849 1 config.go:106] "Starting endpoint slice config controller"
I1213 13:06:44.751854 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1213 13:06:44.751903 1 config.go:403] "Starting serviceCIDR config controller"
I1213 13:06:44.751922 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1213 13:06:44.752869 1 config.go:309] "Starting node config controller"
I1213 13:06:44.752896 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1213 13:06:44.752903 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1213 13:06:44.852619 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1213 13:06:44.852742 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1213 13:06:44.852763 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
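kube-proxy reports setting route_localnet=1 above so that NodePort connections are accepted on 127.0.0.1. If that matters for reproducing the failed localhost curl, the sysctl can be read back on the minikube node; a trivial sketch (the /proc path below is the standard location for net.ipv4.conf.all.route_localnet):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Reads the sysctl kube-proxy says it set; run this on the node itself.
	b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/route_localnet")
	if err != nil {
		panic(err)
	}
	fmt.Println("route_localnet =", strings.TrimSpace(string(b)))
}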
==> kube-scheduler [309ba569e8bc0584154808bbc0a9005c3d74c04dd4028e42972ad622398ee1a0] <==
E1213 13:06:35.802213 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1213 13:06:35.802227 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1213 13:06:35.802379 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1213 13:06:35.802399 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1213 13:06:35.803783 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1213 13:06:36.630820 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1213 13:06:36.651338 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1213 13:06:36.684227 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1213 13:06:36.708960 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1213 13:06:36.838862 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1213 13:06:36.860507 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1213 13:06:36.875175 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1213 13:06:36.880683 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1213 13:06:36.921225 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1213 13:06:37.001879 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1213 13:06:37.050619 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1213 13:06:37.068650 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1213 13:06:37.122456 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1213 13:06:37.143086 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1213 13:06:37.149814 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1213 13:06:37.293733 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1213 13:06:37.334237 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1213 13:06:37.339682 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1213 13:06:37.399050 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
I1213 13:06:39.675227 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
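The burst of "Failed to watch ... is forbidden" errors between 13:06:35 and 13:06:37 is the scheduler starting its informers before the system:kube-scheduler RBAC bindings have propagated; the closing "Caches are synced" line at 13:06:39 shows they resolved on their own. Had they persisted, a hedged way to inspect the effective permissions would be:
# Ask the API server directly what the scheduler identity may do:
kubectl --context addons-685870 auth can-i list pods --as=system:kube-scheduler
kubectl --context addons-685870 auth can-i watch csinodes.storage.k8s.io --as=system:kube-scheduler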
==> kubelet <==
Dec 13 13:09:39 addons-685870 kubelet[1499]: E1213 13:09:39.397882 1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765631379397456113 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 13:09:39 addons-685870 kubelet[1499]: E1213 13:09:39.397904 1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765631379397456113 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 13:09:40 addons-685870 kubelet[1499]: I1213 13:09:40.652310 1499 scope.go:117] "RemoveContainer" containerID="2dc739a2dec434c4f022ab40a2bd6308017a3dd4e26c72b3ee97b6d771585dac"
Dec 13 13:09:40 addons-685870 kubelet[1499]: I1213 13:09:40.766656 1499 scope.go:117] "RemoveContainer" containerID="ad32e5c2130e95920569a26187dcf953b523385c634895b5a424cc551478823d"
Dec 13 13:09:40 addons-685870 kubelet[1499]: I1213 13:09:40.892210 1499 scope.go:117] "RemoveContainer" containerID="51ce09200f4c86647e1461b8c1602837cef9dabffc683ad4a814f42c7f88c31a"
Dec 13 13:09:41 addons-685870 kubelet[1499]: I1213 13:09:41.008335 1499 scope.go:117] "RemoveContainer" containerID="354338c937f694f67f6120f5d3ca5ad4bf590b9ec67506ee8dc7cb61b837078c"
Dec 13 13:09:49 addons-685870 kubelet[1499]: E1213 13:09:49.402034 1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765631389401413721 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 13:09:49 addons-685870 kubelet[1499]: E1213 13:09:49.402075 1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765631389401413721 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 13:09:59 addons-685870 kubelet[1499]: E1213 13:09:59.404921 1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765631399404499127 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 13:09:59 addons-685870 kubelet[1499]: E1213 13:09:59.404963 1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765631399404499127 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 13:10:09 addons-685870 kubelet[1499]: E1213 13:10:09.409227 1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765631409408715537 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 13:10:09 addons-685870 kubelet[1499]: E1213 13:10:09.409267 1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765631409408715537 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 13:10:19 addons-685870 kubelet[1499]: E1213 13:10:19.412243 1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765631419411814343 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 13:10:19 addons-685870 kubelet[1499]: E1213 13:10:19.412501 1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765631419411814343 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 13:10:24 addons-685870 kubelet[1499]: I1213 13:10:24.080917 1499 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-sl2f8" secret="" err="secret \"gcp-auth\" not found"
Dec 13 13:10:29 addons-685870 kubelet[1499]: E1213 13:10:29.415332 1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765631429414918172 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 13:10:29 addons-685870 kubelet[1499]: E1213 13:10:29.415410 1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765631429414918172 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 13:10:39 addons-685870 kubelet[1499]: E1213 13:10:39.419101 1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765631439418718821 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 13:10:39 addons-685870 kubelet[1499]: E1213 13:10:39.419144 1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765631439418718821 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 13:10:49 addons-685870 kubelet[1499]: E1213 13:10:49.421888 1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765631449421410809 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 13:10:49 addons-685870 kubelet[1499]: E1213 13:10:49.421913 1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765631449421410809 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 13:10:55 addons-685870 kubelet[1499]: I1213 13:10:55.081772 1499 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Dec 13 13:10:59 addons-685870 kubelet[1499]: E1213 13:10:59.424205 1499 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765631459423737135 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 13:10:59 addons-685870 kubelet[1499]: E1213 13:10:59.424230 1499 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765631459423737135 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 13:11:05 addons-685870 kubelet[1499]: I1213 13:11:05.127063 1499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpvmm\" (UniqueName: \"kubernetes.io/projected/029b48e2-9bbb-4c18-8728-e55e820b6f1e-kube-api-access-qpvmm\") pod \"hello-world-app-5d498dc89-mvsgp\" (UID: \"029b48e2-9bbb-4c18-8728-e55e820b6f1e\") " pod="default/hello-world-app-5d498dc89-mvsgp"
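The kubelet section interleaves two recurring, unrelated messages. The eviction-manager errors fire every ten seconds because the kubelet considers the runtime's image-filesystem stats incomplete (the /var/lib/containers/storage mountpoint suggests CRI-O here); they are noisy but did not affect the pods in this run. The "gcp-auth not found" warnings only mean the gcp-auth addon is not enabled, so there is no pull secret to inject. Hedged commands to confirm both readings:
# The raw image-filesystem stats the eviction manager is parsing, straight from the runtime:
out/minikube-linux-amd64 -p addons-685870 ssh "sudo crictl imagefsinfo"
# Expected NotFound, matching the kubelet warning: nothing ever created the secret.
kubectl --context addons-685870 get secret gcp-auth -n default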
==> storage-provisioner [24e857c13b18225a46dc09a8a369409eba68c7bc0f71370e487890c74c2f44da] <==
W1213 13:10:41.489025 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:10:43.492633 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:10:43.498269 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:10:45.501762 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:10:45.508868 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:10:47.511758 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:10:47.516456 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:10:49.520592 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:10:49.526358 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:10:51.530145 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:10:51.535424 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:10:53.539206 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:10:53.547111 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:10:55.551298 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:10:55.556893 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:10:57.559935 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:10:57.568741 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:10:59.572093 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:10:59.577071 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:11:01.580399 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:11:01.587903 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:11:03.590863 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:11:03.596956 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:11:05.601783 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 13:11:05.612164 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
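Every line of the storage-provisioner section is the same client-side deprecation warning, emitted in pairs every two seconds; that cadence looks like an Endpoints-based leader-election lock being read and renewed, though that attribution is an inference, not something the log states. The warning itself does not change behavior; the deprecated objects and their replacements can be compared directly:
# The core/v1 objects the provisioner still touches, next to their
# discovery.k8s.io/v1 replacements:
kubectl --context addons-685870 -n kube-system get endpoints
kubectl --context addons-685870 get endpointslices -A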
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-685870 -n addons-685870
helpers_test.go:270: (dbg) Run: kubectl --context addons-685870 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-mvsgp ingress-nginx-admission-create-fj4wb ingress-nginx-admission-patch-df6ws
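Note that the field selector status.phase!=Running matches Succeeded pods as well as Pending ones, which is why the two completed admission-job pods are listed next to the still-creating hello-world-app pod. A hedged variant that makes the phases explicit:
kubectl --context addons-685870 get po -A --field-selector=status.phase!=Running \
  -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name,PHASE:.status.phase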
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run: kubectl --context addons-685870 describe pod hello-world-app-5d498dc89-mvsgp ingress-nginx-admission-create-fj4wb ingress-nginx-admission-patch-df6ws
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-685870 describe pod hello-world-app-5d498dc89-mvsgp ingress-nginx-admission-create-fj4wb ingress-nginx-admission-patch-df6ws: exit status 1 (75.894024ms)
-- stdout --
Name: hello-world-app-5d498dc89-mvsgp
Namespace: default
Priority: 0
Service Account: default
Node: addons-685870/192.168.39.155
Start Time: Sat, 13 Dec 2025 13:11:05 +0000
Labels: app=hello-world-app
pod-template-hash=5d498dc89
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hello-world-app-5d498dc89
Containers:
hello-world-app:
Container ID:
Image: docker.io/kicbase/echo-server:1.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qpvmm (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-qpvmm:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2s default-scheduler Successfully assigned default/hello-world-app-5d498dc89-mvsgp to addons-685870
Normal Pulling 2s kubelet Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-fj4wb" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-df6ws" not found
** /stderr **
helpers_test.go:288: kubectl --context addons-685870 describe pod hello-world-app-5d498dc89-mvsgp ingress-nginx-admission-create-fj4wb ingress-nginx-admission-patch-df6ws: exit status 1
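The describe output captures the real state at post-mortem time: hello-world-app had been scheduled two seconds earlier and was still pulling its image, while the two admission pods had already been cleaned up between the listing above and this describe, hence the NotFound stderr. A hedged sketch of the manual follow-up that would confirm the pod only needed more time:
kubectl --context addons-685870 wait --for=condition=ready pod -l app=hello-world-app --timeout=120s
kubectl --context addons-685870 get pod -l app=hello-world-app -o wide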
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-685870 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-685870 addons disable ingress-dns --alsologtostderr -v=1: (1.107631358s)
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-685870 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-685870 addons disable ingress --alsologtostderr -v=1: (7.845687684s)
--- FAIL: TestAddons/parallel/Ingress (157.55s)