=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run: kubectl --context addons-347541 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run: kubectl --context addons-347541 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run: kubectl --context addons-347541 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [28bd2e4c-a606-45ae-bff8-93cc740702b2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [28bd2e4c-a606-45ae-bff8-93cc740702b2] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.006005749s
I1212 19:33:05.320103 139995 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run: out/minikube-linux-amd64 -p addons-347541 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-347541 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.250094556s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
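Exit status 28 here is curl's "operation timed out" code, propagated through minikube ssh, so the request most likely never got an answer from the ingress controller. A sketch of manual triage against the same profile (the controller selector is taken from the wait step above; the exact service layout depends on the ingress addon manifests):
    kubectl --context addons-347541 -n ingress-nginx get pods,svc -o wide
    kubectl --context addons-347541 -n ingress-nginx logs \
      -l app.kubernetes.io/component=controller --tail=50
    out/minikube-linux-amd64 -p addons-347541 ssh \
      "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"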
addons_test.go:290: (dbg) Run: kubectl --context addons-347541 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run: out/minikube-linux-amd64 -p addons-347541 ip
addons_test.go:301: (dbg) Run: nslookup hello-john.test 192.168.39.202
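The nslookup step queries the VM address reported by "minikube ip" as a DNS server, which is how the ingress-dns addon is exercised. An equivalent hand check (a sketch; dig output is easier to script than nslookup):
    nslookup hello-john.test 192.168.39.202
    dig @192.168.39.202 hello-john.test +short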
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-347541 -n addons-347541
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-amd64 -p addons-347541 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-347541 logs -n 25: (1.085119922s)
helpers_test.go:261: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼──────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-167722 │ download-only-167722 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
│ start │ --download-only -p binary-mirror-604879 --alsologtostderr --binary-mirror http://127.0.0.1:35119 --driver=kvm2 --container-runtime=crio │ binary-mirror-604879 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ │
│ delete │ -p binary-mirror-604879 │ binary-mirror-604879 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
│ addons │ disable dashboard -p addons-347541 │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:30 UTC │ │
│ addons │ enable dashboard -p addons-347541 │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:30 UTC │ │
│ start │ -p addons-347541 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:30 UTC │ 12 Dec 25 19:32 UTC │
│ addons │ addons-347541 addons disable volcano --alsologtostderr -v=1 │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
│ addons │ addons-347541 addons disable gcp-auth --alsologtostderr -v=1 │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
│ addons │ enable headlamp -p addons-347541 --alsologtostderr -v=1 │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
│ addons │ addons-347541 addons disable yakd --alsologtostderr -v=1 │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
│ addons │ addons-347541 addons disable metrics-server --alsologtostderr -v=1 │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
│ addons │ addons-347541 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
│ addons │ addons-347541 addons disable headlamp --alsologtostderr -v=1 │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
│ ip │ addons-347541 ip │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
│ addons │ addons-347541 addons disable registry --alsologtostderr -v=1 │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
│ addons │ addons-347541 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
│ addons │ addons-347541 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:33 UTC │ 12 Dec 25 19:33 UTC │
│ ssh │ addons-347541 ssh cat /opt/local-path-provisioner/pvc-c45c01d7-a7ea-4447-bcca-5299d5d7b030_default_test-pvc/file1 │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:33 UTC │ 12 Dec 25 19:33 UTC │
│ addons │ addons-347541 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:33 UTC │ 12 Dec 25 19:33 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-347541 │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:33 UTC │ 12 Dec 25 19:33 UTC │
│ ssh │ addons-347541 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:33 UTC │ │
│ addons │ addons-347541 addons disable registry-creds --alsologtostderr -v=1 │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:33 UTC │ 12 Dec 25 19:33 UTC │
│ addons │ addons-347541 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:33 UTC │ 12 Dec 25 19:33 UTC │
│ addons │ addons-347541 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:33 UTC │ 12 Dec 25 19:33 UTC │
│ ip │ addons-347541 ip │ addons-347541 │ jenkins │ v1.37.0 │ 12 Dec 25 19:35 UTC │ 12 Dec 25 19:35 UTC │
└─────────┴──────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/12 19:30:00
Running on machine: ubuntu-20-agent-10
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1212 19:30:00.668485 140968 out.go:360] Setting OutFile to fd 1 ...
I1212 19:30:00.668763 140968 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:30:00.668773 140968 out.go:374] Setting ErrFile to fd 2...
I1212 19:30:00.668780 140968 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:30:00.668997 140968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
I1212 19:30:00.669563 140968 out.go:368] Setting JSON to false
I1212 19:30:00.670427 140968 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4341,"bootTime":1765563460,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1212 19:30:00.670487 140968 start.go:143] virtualization: kvm guest
I1212 19:30:00.672071 140968 out.go:179] * [addons-347541] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1212 19:30:00.673116 140968 out.go:179] - MINIKUBE_LOCATION=22112
I1212 19:30:00.673149 140968 notify.go:221] Checking for updates...
I1212 19:30:00.675285 140968 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1212 19:30:00.676263 140968 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
I1212 19:30:00.677190 140968 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
I1212 19:30:00.678126 140968 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1212 19:30:00.678976 140968 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1212 19:30:00.680046 140968 driver.go:422] Setting default libvirt URI to qemu:///system
I1212 19:30:00.711486 140968 out.go:179] * Using the kvm2 driver based on user configuration
I1212 19:30:00.712384 140968 start.go:309] selected driver: kvm2
I1212 19:30:00.712399 140968 start.go:927] validating driver "kvm2" against <nil>
I1212 19:30:00.712415 140968 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1212 19:30:00.713102 140968 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1212 19:30:00.713378 140968 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1212 19:30:00.713409 140968 cni.go:84] Creating CNI manager for ""
I1212 19:30:00.713462 140968 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1212 19:30:00.713474 140968 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1212 19:30:00.713537 140968 start.go:353] cluster config:
{Name:addons-347541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-347541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1212 19:30:00.713666 140968 iso.go:125] acquiring lock: {Name:mka604e7c5a779b48764eb6b2b4a8a1c6683346a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1212 19:30:00.715408 140968 out.go:179] * Starting "addons-347541" primary control-plane node in "addons-347541" cluster
I1212 19:30:00.716457 140968 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1212 19:30:00.716486 140968 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
I1212 19:30:00.716508 140968 cache.go:65] Caching tarball of preloaded images
I1212 19:30:00.716585 140968 preload.go:238] Found /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1212 19:30:00.716596 140968 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
I1212 19:30:00.716890 140968 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/config.json ...
I1212 19:30:00.716909 140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/config.json: {Name:mk7b29990bece5ef9fb6739e4abf70fe5f6174b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:30:00.717047 140968 start.go:360] acquireMachinesLock for addons-347541: {Name:mk1985c179f459a7b1b82780fe7717dfacfba5d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1212 19:30:00.717551 140968 start.go:364] duration metric: took 489.279µs to acquireMachinesLock for "addons-347541"
I1212 19:30:00.717597 140968 start.go:93] Provisioning new machine with config: &{Name:addons-347541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-347541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1212 19:30:00.717652 140968 start.go:125] createHost starting for "" (driver="kvm2")
I1212 19:30:00.718911 140968 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1212 19:30:00.719098 140968 start.go:159] libmachine.API.Create for "addons-347541" (driver="kvm2")
I1212 19:30:00.719148 140968 client.go:173] LocalClient.Create starting
I1212 19:30:00.719227 140968 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem
I1212 19:30:00.905026 140968 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem
I1212 19:30:01.068487 140968 main.go:143] libmachine: creating domain...
I1212 19:30:01.068509 140968 main.go:143] libmachine: creating network...
I1212 19:30:01.069862 140968 main.go:143] libmachine: found existing default network
I1212 19:30:01.070154 140968 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1212 19:30:01.071312 140968 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001975d50}
I1212 19:30:01.071422 140968 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-347541</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1212 19:30:01.077187 140968 main.go:143] libmachine: creating private network mk-addons-347541 192.168.39.0/24...
I1212 19:30:01.142955 140968 main.go:143] libmachine: private network mk-addons-347541 192.168.39.0/24 created
I1212 19:30:01.143260 140968 main.go:143] libmachine: <network>
<name>mk-addons-347541</name>
<uuid>d48b8b9e-0d8d-48e3-b817-290b59763518</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:90:63:48'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
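The private network created above can be inspected directly on the host. A sketch, assuming virsh is installed and using the same qemu:///system URI shown in the log:
    virsh --connect qemu:///system net-list --all
    virsh --connect qemu:///system net-dumpxml mk-addons-347541
    virsh --connect qemu:///system net-dhcp-leases mk-addons-347541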
I1212 19:30:01.143297 140968 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541 ...
I1212 19:30:01.143320 140968 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22112-135957/.minikube/cache/iso/amd64/minikube-v1.37.0-1765505725-22112-amd64.iso
I1212 19:30:01.143331 140968 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22112-135957/.minikube
I1212 19:30:01.143398 140968 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22112-135957/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22112-135957/.minikube/cache/iso/amd64/minikube-v1.37.0-1765505725-22112-amd64.iso...
I1212 19:30:01.446361 140968 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa...
I1212 19:30:01.631234 140968 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/addons-347541.rawdisk...
I1212 19:30:01.631286 140968 main.go:143] libmachine: Writing magic tar header
I1212 19:30:01.631338 140968 main.go:143] libmachine: Writing SSH key tar header
I1212 19:30:01.631425 140968 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541 ...
I1212 19:30:01.631485 140968 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541
I1212 19:30:01.631532 140968 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541 (perms=drwx------)
I1212 19:30:01.631552 140968 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22112-135957/.minikube/machines
I1212 19:30:01.631564 140968 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22112-135957/.minikube/machines (perms=drwxr-xr-x)
I1212 19:30:01.631576 140968 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22112-135957/.minikube
I1212 19:30:01.631587 140968 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22112-135957/.minikube (perms=drwxr-xr-x)
I1212 19:30:01.631597 140968 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22112-135957
I1212 19:30:01.631612 140968 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22112-135957 (perms=drwxrwxr-x)
I1212 19:30:01.631625 140968 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1212 19:30:01.631634 140968 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1212 19:30:01.631644 140968 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1212 19:30:01.631652 140968 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1212 19:30:01.631663 140968 main.go:143] libmachine: checking permissions on dir: /home
I1212 19:30:01.631670 140968 main.go:143] libmachine: skipping /home - not owner
I1212 19:30:01.631674 140968 main.go:143] libmachine: defining domain...
I1212 19:30:01.633057 140968 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-347541</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/addons-347541.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-347541'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1212 19:30:01.640422 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:e1:43:7d in network default
I1212 19:30:01.641086 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:01.641119 140968 main.go:143] libmachine: starting domain...
I1212 19:30:01.641138 140968 main.go:143] libmachine: ensuring networks are active...
I1212 19:30:01.641958 140968 main.go:143] libmachine: Ensuring network default is active
I1212 19:30:01.642432 140968 main.go:143] libmachine: Ensuring network mk-addons-347541 is active
I1212 19:30:01.643370 140968 main.go:143] libmachine: getting domain XML...
I1212 19:30:01.644527 140968 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-347541</name>
<uuid>b1fb684f-da1f-4675-9f4b-aa96973add54</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/addons-347541.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:a9:57:3c'/>
<source network='mk-addons-347541'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:e1:43:7d'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
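The "waiting for IP" loop that follows polls DHCP lease records first and falls back to ARP; the same two views are available from virsh (a sketch, same qemu:///system URI as above):
    virsh --connect qemu:///system domifaddr addons-347541 --source lease
    virsh --connect qemu:///system domifaddr addons-347541 --source arp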
I1212 19:30:02.947936 140968 main.go:143] libmachine: waiting for domain to start...
I1212 19:30:02.949380 140968 main.go:143] libmachine: domain is now running
I1212 19:30:02.949407 140968 main.go:143] libmachine: waiting for IP...
I1212 19:30:02.950285 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:02.950778 140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
I1212 19:30:02.950801 140968 main.go:143] libmachine: trying to list again with source=arp
I1212 19:30:02.951063 140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
I1212 19:30:02.951166 140968 retry.go:31] will retry after 298.326026ms: waiting for domain to come up
I1212 19:30:03.250759 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:03.251277 140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
I1212 19:30:03.251294 140968 main.go:143] libmachine: trying to list again with source=arp
I1212 19:30:03.251572 140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
I1212 19:30:03.251615 140968 retry.go:31] will retry after 259.086026ms: waiting for domain to come up
I1212 19:30:03.512156 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:03.512724 140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
I1212 19:30:03.512746 140968 main.go:143] libmachine: trying to list again with source=arp
I1212 19:30:03.513042 140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
I1212 19:30:03.513081 140968 retry.go:31] will retry after 460.175214ms: waiting for domain to come up
I1212 19:30:03.974664 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:03.975165 140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
I1212 19:30:03.975184 140968 main.go:143] libmachine: trying to list again with source=arp
I1212 19:30:03.975533 140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
I1212 19:30:03.975568 140968 retry.go:31] will retry after 478.456546ms: waiting for domain to come up
I1212 19:30:04.455201 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:04.455741 140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
I1212 19:30:04.455759 140968 main.go:143] libmachine: trying to list again with source=arp
I1212 19:30:04.456016 140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
I1212 19:30:04.456060 140968 retry.go:31] will retry after 486.30307ms: waiting for domain to come up
I1212 19:30:04.943756 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:04.944287 140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
I1212 19:30:04.944304 140968 main.go:143] libmachine: trying to list again with source=arp
I1212 19:30:04.944556 140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
I1212 19:30:04.944590 140968 retry.go:31] will retry after 848.999206ms: waiting for domain to come up
I1212 19:30:05.795770 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:05.796357 140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
I1212 19:30:05.796376 140968 main.go:143] libmachine: trying to list again with source=arp
I1212 19:30:05.796673 140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
I1212 19:30:05.796708 140968 retry.go:31] will retry after 845.582774ms: waiting for domain to come up
I1212 19:30:06.644411 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:06.644945 140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
I1212 19:30:06.644966 140968 main.go:143] libmachine: trying to list again with source=arp
I1212 19:30:06.645286 140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
I1212 19:30:06.645326 140968 retry.go:31] will retry after 1.081306031s: waiting for domain to come up
I1212 19:30:07.728673 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:07.729177 140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
I1212 19:30:07.729193 140968 main.go:143] libmachine: trying to list again with source=arp
I1212 19:30:07.729452 140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
I1212 19:30:07.729496 140968 retry.go:31] will retry after 1.620619119s: waiting for domain to come up
I1212 19:30:09.351356 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:09.351854 140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
I1212 19:30:09.351872 140968 main.go:143] libmachine: trying to list again with source=arp
I1212 19:30:09.352157 140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
I1212 19:30:09.352201 140968 retry.go:31] will retry after 1.817980315s: waiting for domain to come up
I1212 19:30:11.171361 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:11.171930 140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
I1212 19:30:11.171943 140968 main.go:143] libmachine: trying to list again with source=arp
I1212 19:30:11.172278 140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
I1212 19:30:11.172313 140968 retry.go:31] will retry after 2.176390828s: waiting for domain to come up
I1212 19:30:13.351471 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:13.351920 140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
I1212 19:30:13.351936 140968 main.go:143] libmachine: trying to list again with source=arp
I1212 19:30:13.352208 140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
I1212 19:30:13.352242 140968 retry.go:31] will retry after 3.340610976s: waiting for domain to come up
I1212 19:30:16.694012 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:16.694521 140968 main.go:143] libmachine: domain addons-347541 has current primary IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:16.694537 140968 main.go:143] libmachine: found domain IP: 192.168.39.202
I1212 19:30:16.694558 140968 main.go:143] libmachine: reserving static IP address...
I1212 19:30:16.694894 140968 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-347541", mac: "52:54:00:a9:57:3c", ip: "192.168.39.202"} in network mk-addons-347541
I1212 19:30:16.877709 140968 main.go:143] libmachine: reserved static IP address 192.168.39.202 for domain addons-347541
I1212 19:30:16.877734 140968 main.go:143] libmachine: waiting for SSH...
I1212 19:30:16.877743 140968 main.go:143] libmachine: Getting to WaitForSSH function...
I1212 19:30:16.880346 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:16.880735 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:16.880764 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:16.881059 140968 main.go:143] libmachine: Using SSH client type: native
I1212 19:30:16.881404 140968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.202 22 <nil> <nil>}
I1212 19:30:16.881418 140968 main.go:143] libmachine: About to run SSH command:
exit 0
I1212 19:30:16.994373 140968 main.go:143] libmachine: SSH cmd err, output: <nil>:
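The "exit 0" command is purely an SSH liveness probe. A manual equivalent (a sketch; key path and the docker user are taken from the sshutil lines later in this log):
    ssh -i /home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa \
      docker@192.168.39.202 "exit 0" && echo "ssh up"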
I1212 19:30:16.994816 140968 main.go:143] libmachine: domain creation complete
I1212 19:30:16.996372 140968 machine.go:94] provisionDockerMachine start ...
I1212 19:30:16.998598 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:16.998992 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:16.999021 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:16.999174 140968 main.go:143] libmachine: Using SSH client type: native
I1212 19:30:16.999375 140968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.202 22 <nil> <nil>}
I1212 19:30:16.999386 140968 main.go:143] libmachine: About to run SSH command:
hostname
I1212 19:30:17.112605 140968 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1212 19:30:17.112640 140968 buildroot.go:166] provisioning hostname "addons-347541"
I1212 19:30:17.115802 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:17.116191 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:17.116218 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:17.116461 140968 main.go:143] libmachine: Using SSH client type: native
I1212 19:30:17.116702 140968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.202 22 <nil> <nil>}
I1212 19:30:17.116717 140968 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-347541 && echo "addons-347541" | sudo tee /etc/hostname
I1212 19:30:17.252517 140968 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-347541
I1212 19:30:17.255158 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:17.255631 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:17.255661 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:17.255831 140968 main.go:143] libmachine: Using SSH client type: native
I1212 19:30:17.256099 140968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.202 22 <nil> <nil>}
I1212 19:30:17.256142 140968 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-347541' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-347541/g' /etc/hosts;
else
echo '127.0.1.1 addons-347541' | sudo tee -a /etc/hosts;
fi
fi
I1212 19:30:17.381776 140968 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1212 19:30:17.381807 140968 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22112-135957/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-135957/.minikube}
I1212 19:30:17.381849 140968 buildroot.go:174] setting up certificates
I1212 19:30:17.381861 140968 provision.go:84] configureAuth start
I1212 19:30:17.384712 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:17.385180 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:17.385205 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:17.387466 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:17.387751 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:17.387768 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:17.387888 140968 provision.go:143] copyHostCerts
I1212 19:30:17.387975 140968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-135957/.minikube/cert.pem (1123 bytes)
I1212 19:30:17.388092 140968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-135957/.minikube/key.pem (1675 bytes)
I1212 19:30:17.388164 140968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-135957/.minikube/ca.pem (1078 bytes)
I1212 19:30:17.389003 140968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-135957/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca-key.pem org=jenkins.addons-347541 san=[127.0.0.1 192.168.39.202 addons-347541 localhost minikube]
I1212 19:30:17.605876 140968 provision.go:177] copyRemoteCerts
I1212 19:30:17.605950 140968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1212 19:30:17.608569 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:17.608928 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:17.608959 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:17.609125 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:17.698479 140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1212 19:30:17.728262 140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1212 19:30:17.757028 140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1212 19:30:17.787316 140968 provision.go:87] duration metric: took 405.415343ms to configureAuth
I1212 19:30:17.787351 140968 buildroot.go:189] setting minikube options for container-runtime
I1212 19:30:17.787557 140968 config.go:182] Loaded profile config "addons-347541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:30:17.790502 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:17.790905 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:17.790931 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:17.791159 140968 main.go:143] libmachine: Using SSH client type: native
I1212 19:30:17.791407 140968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.202 22 <nil> <nil>}
I1212 19:30:17.791425 140968 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1212 19:30:18.324560 140968 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
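To confirm the drop-in landed and cri-o came back up after the restart above, a sketch run through the same profile:
    out/minikube-linux-amd64 -p addons-347541 ssh \
      "cat /etc/sysconfig/crio.minikube && sudo systemctl is-active crio"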
I1212 19:30:18.324596 140968 machine.go:97] duration metric: took 1.328200526s to provisionDockerMachine
I1212 19:30:18.324609 140968 client.go:176] duration metric: took 17.605450349s to LocalClient.Create
I1212 19:30:18.324630 140968 start.go:167] duration metric: took 17.605532959s to libmachine.API.Create "addons-347541"
I1212 19:30:18.324665 140968 start.go:293] postStartSetup for "addons-347541" (driver="kvm2")
I1212 19:30:18.324683 140968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1212 19:30:18.324775 140968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1212 19:30:18.327501 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:18.327852 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:18.327871 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:18.327987 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:18.415516 140968 ssh_runner.go:195] Run: cat /etc/os-release
I1212 19:30:18.419934 140968 info.go:137] Remote host: Buildroot 2025.02
I1212 19:30:18.419960 140968 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-135957/.minikube/addons for local assets ...
I1212 19:30:18.420044 140968 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-135957/.minikube/files for local assets ...
I1212 19:30:18.420080 140968 start.go:296] duration metric: took 95.403261ms for postStartSetup
I1212 19:30:18.423197 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:18.423594 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:18.423624 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:18.423854 140968 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/config.json ...
I1212 19:30:18.424056 140968 start.go:128] duration metric: took 17.706391968s to createHost
I1212 19:30:18.426148 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:18.426521 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:18.426550 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:18.426714 140968 main.go:143] libmachine: Using SSH client type: native
I1212 19:30:18.426937 140968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.202 22 <nil> <nil>}
I1212 19:30:18.426950 140968 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1212 19:30:18.541163 140968 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765567818.501135341
I1212 19:30:18.541187 140968 fix.go:216] guest clock: 1765567818.501135341
I1212 19:30:18.541196 140968 fix.go:229] Guest: 2025-12-12 19:30:18.501135341 +0000 UTC Remote: 2025-12-12 19:30:18.424088593 +0000 UTC m=+17.805219101 (delta=77.046748ms)
I1212 19:30:18.541222 140968 fix.go:200] guest clock delta is within tolerance: 77.046748ms
I1212 19:30:18.541229 140968 start.go:83] releasing machines lock for "addons-347541", held for 17.823662986s
I1212 19:30:18.543967 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:18.544357 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:18.544381 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:18.544911 140968 ssh_runner.go:195] Run: cat /version.json
I1212 19:30:18.544985 140968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1212 19:30:18.548144 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:18.548260 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:18.548589 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:18.548656 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:18.548684 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:18.548721 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:18.548882 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:18.549023 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:18.657618 140968 ssh_runner.go:195] Run: systemctl --version
I1212 19:30:18.663841 140968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1212 19:30:18.817799 140968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1212 19:30:18.825475 140968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1212 19:30:18.825553 140968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1212 19:30:18.845765 140968 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1212 19:30:18.845791 140968 start.go:496] detecting cgroup driver to use...
I1212 19:30:18.845876 140968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1212 19:30:18.867508 140968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1212 19:30:18.884489 140968 docker.go:218] disabling cri-docker service (if available) ...
I1212 19:30:18.884573 140968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1212 19:30:18.902820 140968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1212 19:30:18.919515 140968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1212 19:30:19.072250 140968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1212 19:30:19.289398 140968 docker.go:234] disabling docker service ...
I1212 19:30:19.289463 140968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1212 19:30:19.305224 140968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1212 19:30:19.319946 140968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1212 19:30:19.483025 140968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1212 19:30:19.628651 140968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1212 19:30:19.644709 140968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1212 19:30:19.666219 140968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1212 19:30:19.666282 140968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1212 19:30:19.678574 140968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1212 19:30:19.678637 140968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1212 19:30:19.689982 140968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1212 19:30:19.701617 140968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1212 19:30:19.713495 140968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1212 19:30:19.725720 140968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1212 19:30:19.737186 140968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1212 19:30:19.757536 140968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
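Note: taken together, the sed/grep edits above converge 02-crio.conf on minikube's expected runtime settings: the pause image, cgroupfs as cgroup manager with conmon in the pod cgroup, and a default_sysctls block that opens low ports to unprivileged pods. Reconstructed from the commands (a sketch, not a dump of the file), the touched keys should read:
  $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
  # pause_image = "registry.k8s.io/pause:3.10.1"
  # cgroup_manager = "cgroupfs"
  # conmon_cgroup = "pod"
  # default_sysctls = [
  #   "net.ipv4.ip_unprivileged_port_start=0",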
I1212 19:30:19.769399 140968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1212 19:30:19.779271 140968 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1212 19:30:19.779330 140968 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1212 19:30:19.800843 140968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
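Note: the failed sysctl above is expected on a fresh VM: /proc/sys/net/bridge/* only appears once br_netfilter is loaded, which the modprobe fixes; together with ip_forward=1 these are the standard kernel prerequisites for pod networking. To confirm both took effect (sketch):
  $ lsmod | grep br_netfilter
  $ sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
  # net.bridge.bridge-nf-call-iptables = 1
  # net.ipv4.ip_forward = 1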
I1212 19:30:19.813428 140968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 19:30:19.959567 140968 ssh_runner.go:195] Run: sudo systemctl restart crio
I1212 19:30:20.066130 140968 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1212 19:30:20.066263 140968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1212 19:30:20.071997 140968 start.go:564] Will wait 60s for crictl version
I1212 19:30:20.072081 140968 ssh_runner.go:195] Run: which crictl
I1212 19:30:20.076021 140968 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1212 19:30:20.110823 140968 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1212 19:30:20.110955 140968 ssh_runner.go:195] Run: crio --version
I1212 19:30:20.138662 140968 ssh_runner.go:195] Run: crio --version
I1212 19:30:20.168121 140968 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
I1212 19:30:20.172030 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:20.172398 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:20.172421 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:20.172607 140968 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1212 19:30:20.177235 140968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1212 19:30:20.192254  140968 kubeadm.go:884] updating cluster {Name:addons-347541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-347541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1212 19:30:20.192405 140968 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1212 19:30:20.192463 140968 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 19:30:20.222073 140968 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
I1212 19:30:20.222180 140968 ssh_runner.go:195] Run: which lz4
I1212 19:30:20.226629 140968 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1212 19:30:20.231403 140968 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1212 19:30:20.231444 140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
I1212 19:30:21.372447 140968 crio.go:462] duration metric: took 1.145870197s to copy over tarball
I1212 19:30:21.372575 140968 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1212 19:30:23.093577 140968 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.720959335s)
I1212 19:30:23.093613 140968 crio.go:469] duration metric: took 1.721123252s to extract the tarball
I1212 19:30:23.093622 140968 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1212 19:30:23.130289 140968 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 19:30:23.169285 140968 crio.go:514] all images are preloaded for cri-o runtime.
I1212 19:30:23.169313 140968 cache_images.go:86] Images are preloaded, skipping loading
I1212 19:30:23.169324 140968 kubeadm.go:935] updating node { 192.168.39.202 8443 v1.34.2 crio true true} ...
I1212 19:30:23.169456 140968 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-347541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.202
[Install]
config:
{KubernetesVersion:v1.34.2 ClusterName:addons-347541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
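Note: the snippet above is the kubelet drop-in minikube is about to template (the 313-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below): the empty ExecStart= clears the packaged command line, and the second ExecStart relaunches the versioned kubelet binary with the node IP and hostname override. On the node, the merged unit can be reviewed with:
  $ sudo systemctl cat kubelet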
I1212 19:30:23.169531 140968 ssh_runner.go:195] Run: crio config
I1212 19:30:23.216098 140968 cni.go:84] Creating CNI manager for ""
I1212 19:30:23.216134 140968 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1212 19:30:23.216157 140968 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1212 19:30:23.216196  140968 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.202 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-347541 NodeName:addons-347541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1212 19:30:23.216324 140968 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.202
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "addons-347541"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.39.202"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.202"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
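Note: once this config lands at /var/tmp/minikube/kubeadm.yaml (see the scp and cp steps below), it can be sanity-checked with kubeadm's own validator before init runs (a sketch; `kubeadm config validate` exists in recent kubeadm releases, which should include v1.34):
  $ sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml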
I1212 19:30:23.216387 140968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
I1212 19:30:23.228847 140968 binaries.go:51] Found k8s binaries, skipping transfer
I1212 19:30:23.228944 140968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1212 19:30:23.241485 140968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I1212 19:30:23.262387 140968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1212 19:30:23.283470 140968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
I1212 19:30:23.304411 140968 ssh_runner.go:195] Run: grep 192.168.39.202 control-plane.minikube.internal$ /etc/hosts
I1212 19:30:23.308528 140968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.202 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
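Note: both hosts-file edits (host.minikube.internal earlier, control-plane.minikube.internal here) use the same grep-out/append/cp-via-tmpfile pattern, which keeps the update idempotent and works under sudo where a plain `>>` redirect would run as the unprivileged SSH user. The end state inside the VM should be (sketch; whitespace may differ):
  $ grep minikube.internal /etc/hosts
  # 192.168.39.1   host.minikube.internal
  # 192.168.39.202 control-plane.minikube.internal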
I1212 19:30:23.323463 140968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 19:30:23.464842 140968 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1212 19:30:23.484934 140968 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541 for IP: 192.168.39.202
I1212 19:30:23.484985 140968 certs.go:195] generating shared ca certs ...
I1212 19:30:23.485011 140968 certs.go:227] acquiring lock for ca certs: {Name:mk856e15c7830c27b8e705838c72180e3414c0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:30:23.485194 140968 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key
I1212 19:30:23.563998 140968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt ...
I1212 19:30:23.564032 140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt: {Name:mk18cfabcdb3a68d046e7a8c89c35160dc36f4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:30:23.564819 140968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key ...
I1212 19:30:23.564838 140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key: {Name:mk47a607b7e1d4fe7cd7ac22805d30141927b16d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:30:23.565292 140968 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key
I1212 19:30:23.617265 140968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.crt ...
I1212 19:30:23.617302 140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.crt: {Name:mk85dbc3c74242157ff9f330c6deabfc77aec2e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:30:23.618098 140968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key ...
I1212 19:30:23.618142 140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key: {Name:mkde9c4df31a46fde4189054105ffdc3f6362e6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:30:23.618304 140968 certs.go:257] generating profile certs ...
I1212 19:30:23.618370 140968 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.key
I1212 19:30:23.618397 140968 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt with IP's: []
I1212 19:30:23.771361 140968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt ...
I1212 19:30:23.771400 140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: {Name:mkabf5ff19b68483714d8347866512a978f4ba2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:30:23.771586 140968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.key ...
I1212 19:30:23.771598 140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.key: {Name:mk6047470d3c978be16f7b1d2eed436c1b281da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:30:23.772194 140968 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.key.ed0663fe
I1212 19:30:23.772223 140968 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.crt.ed0663fe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.202]
I1212 19:30:23.826537 140968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.crt.ed0663fe ...
I1212 19:30:23.826570 140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.crt.ed0663fe: {Name:mk263cdb39097ad588559d4bf43d83e7f753e8a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:30:23.826741 140968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.key.ed0663fe ...
I1212 19:30:23.826758 140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.key.ed0663fe: {Name:mk0a31db8094fd8f08b871bfe87ac103b9347e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:30:23.827459 140968 certs.go:382] copying /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.crt.ed0663fe -> /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.crt
I1212 19:30:23.827544 140968 certs.go:386] copying /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.key.ed0663fe -> /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.key
I1212 19:30:23.827592 140968 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/proxy-client.key
I1212 19:30:23.827612 140968 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/proxy-client.crt with IP's: []
I1212 19:30:23.984075 140968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/proxy-client.crt ...
I1212 19:30:23.984118 140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/proxy-client.crt: {Name:mkd556fe950bdc660e1b7357de69d4068f78044e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:30:23.984337 140968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/proxy-client.key ...
I1212 19:30:23.984355 140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/proxy-client.key: {Name:mka17887116c1f5f6d129bb865f71a16e35db1da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:30:23.984572 140968 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca-key.pem (1675 bytes)
I1212 19:30:23.984617 140968 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem (1078 bytes)
I1212 19:30:23.984645 140968 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem (1123 bytes)
I1212 19:30:23.984671 140968 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/key.pem (1675 bytes)
I1212 19:30:23.985337 140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1212 19:30:24.015999 140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1212 19:30:24.045956 140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1212 19:30:24.075506 140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1212 19:30:24.105700 140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1212 19:30:24.136977 140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1212 19:30:24.184635 140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1212 19:30:24.222985 140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
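Note: the apiserver serving cert generated above has to carry every address clients will dial: the in-cluster service VIP (10.96.0.1), loopback, and the node IP, per the IP list logged for the cert. A quick SAN check on the copied cert (sketch):
  $ sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'
  # ... IP Address:10.96.0.1, IP Address:127.0.0.1, IP Address:10.0.0.1, IP Address:192.168.39.202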
I1212 19:30:24.253538 140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1212 19:30:24.283311 140968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1212 19:30:24.304150 140968 ssh_runner.go:195] Run: openssl version
I1212 19:30:24.310534 140968 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1212 19:30:24.322534 140968 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1212 19:30:24.334552 140968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1212 19:30:24.339769 140968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
I1212 19:30:24.339845 140968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1212 19:30:24.347228 140968 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1212 19:30:24.359091 140968 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
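Note: the symlink name b5213941.0 is not arbitrary: it is OpenSSL's subject-name hash for minikubeCA, produced by the `openssl x509 -hash` call above, and hash-named links in /etc/ssl/certs are how the system TLS stack locates CAs during verification. The name can be reproduced by hand:
  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  # b5213941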
I1212 19:30:24.371148 140968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1212 19:30:24.375947 140968 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1212 19:30:24.376021  140968 kubeadm.go:401] StartCluster: {Name:addons-347541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-347541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1212 19:30:24.376094 140968 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1212 19:30:24.376188 140968 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1212 19:30:24.407349 140968 cri.go:89] found id: ""
I1212 19:30:24.407431 140968 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1212 19:30:24.419992 140968 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1212 19:30:24.432630 140968 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1212 19:30:24.444916 140968 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1212 19:30:24.444936 140968 kubeadm.go:158] found existing configuration files:
I1212 19:30:24.445000 140968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1212 19:30:24.456502 140968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1212 19:30:24.456572 140968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1212 19:30:24.468621 140968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1212 19:30:24.479764 140968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1212 19:30:24.479829 140968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1212 19:30:24.491684 140968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1212 19:30:24.502606 140968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1212 19:30:24.502672 140968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1212 19:30:24.514307 140968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1212 19:30:24.524777 140968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1212 19:30:24.524845 140968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1212 19:30:24.536512 140968 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1212 19:30:24.680901 140968 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1212 19:30:37.730772 140968 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
I1212 19:30:37.730831 140968 kubeadm.go:319] [preflight] Running pre-flight checks
I1212 19:30:37.730913 140968 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1212 19:30:37.731103 140968 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1212 19:30:37.731280 140968 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1212 19:30:37.731362 140968 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1212 19:30:37.732736 140968 out.go:252] - Generating certificates and keys ...
I1212 19:30:37.732837 140968 kubeadm.go:319] [certs] Using existing ca certificate authority
I1212 19:30:37.732924 140968 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1212 19:30:37.733033 140968 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1212 19:30:37.733138 140968 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1212 19:30:37.733230 140968 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1212 19:30:37.733346 140968 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1212 19:30:37.733446 140968 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1212 19:30:37.733612 140968 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-347541 localhost] and IPs [192.168.39.202 127.0.0.1 ::1]
I1212 19:30:37.733684 140968 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1212 19:30:37.733858 140968 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-347541 localhost] and IPs [192.168.39.202 127.0.0.1 ::1]
I1212 19:30:37.733950 140968 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1212 19:30:37.734038 140968 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1212 19:30:37.734103 140968 kubeadm.go:319] [certs] Generating "sa" key and public key
I1212 19:30:37.734210 140968 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1212 19:30:37.734291 140968 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1212 19:30:37.734368 140968 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1212 19:30:37.734448 140968 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1212 19:30:37.734534 140968 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1212 19:30:37.734625 140968 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1212 19:30:37.734729 140968 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1212 19:30:37.734823 140968 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1212 19:30:37.736341 140968 out.go:252] - Booting up control plane ...
I1212 19:30:37.736471 140968 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1212 19:30:37.736603 140968 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1212 19:30:37.736661 140968 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1212 19:30:37.736761 140968 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1212 19:30:37.736838 140968 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1212 19:30:37.736921 140968 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1212 19:30:37.736988 140968 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1212 19:30:37.737020 140968 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1212 19:30:37.737180 140968 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1212 19:30:37.737267 140968 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1212 19:30:37.737319 140968 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501137122s
I1212 19:30:37.737394 140968 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1212 19:30:37.737463 140968 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.202:8443/livez
I1212 19:30:37.737541 140968 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1212 19:30:37.737608 140968 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1212 19:30:37.737703 140968 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.669943413s
I1212 19:30:37.737823 140968 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.710932375s
I1212 19:30:37.737921 140968 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502318695s
I1212 19:30:37.738042 140968 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1212 19:30:37.738222 140968 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1212 19:30:37.738306 140968 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1212 19:30:37.738499 140968 kubeadm.go:319] [mark-control-plane] Marking the node addons-347541 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1212 19:30:37.738561 140968 kubeadm.go:319] [bootstrap-token] Using token: 5xyxrx.8cc9hzhgxpkclftb
I1212 19:30:37.740549 140968 out.go:252] - Configuring RBAC rules ...
I1212 19:30:37.740668 140968 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1212 19:30:37.740760 140968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1212 19:30:37.740931 140968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1212 19:30:37.741079 140968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1212 19:30:37.741247 140968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1212 19:30:37.741368 140968 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1212 19:30:37.741508 140968 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1212 19:30:37.741570 140968 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1212 19:30:37.741637 140968 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1212 19:30:37.741649 140968 kubeadm.go:319]
I1212 19:30:37.741699 140968 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1212 19:30:37.741709 140968 kubeadm.go:319]
I1212 19:30:37.741767 140968 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1212 19:30:37.741773 140968 kubeadm.go:319]
I1212 19:30:37.741793 140968 kubeadm.go:319] mkdir -p $HOME/.kube
I1212 19:30:37.741840 140968 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1212 19:30:37.741894 140968 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1212 19:30:37.741902 140968 kubeadm.go:319]
I1212 19:30:37.741984 140968 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1212 19:30:37.741992 140968 kubeadm.go:319]
I1212 19:30:37.742056 140968 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1212 19:30:37.742065 140968 kubeadm.go:319]
I1212 19:30:37.742149 140968 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1212 19:30:37.742254 140968 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1212 19:30:37.742348 140968 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1212 19:30:37.742361 140968 kubeadm.go:319]
I1212 19:30:37.742461 140968 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1212 19:30:37.742563 140968 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1212 19:30:37.742572 140968 kubeadm.go:319]
I1212 19:30:37.742673 140968 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5xyxrx.8cc9hzhgxpkclftb \
I1212 19:30:37.742802 140968 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:2a055c2f74563dd017e9ed55ed932d3460a1f443e96894092fdaf892a84e9a9a \
I1212 19:30:37.742833 140968 kubeadm.go:319] --control-plane
I1212 19:30:37.742843 140968 kubeadm.go:319]
I1212 19:30:37.742941 140968 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1212 19:30:37.742948 140968 kubeadm.go:319]
I1212 19:30:37.743046 140968 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5xyxrx.8cc9hzhgxpkclftb \
I1212 19:30:37.743219 140968 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:2a055c2f74563dd017e9ed55ed932d3460a1f443e96894092fdaf892a84e9a9a
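Note: the bootstrap token in the join commands above expires after the ttl set in the InitConfiguration (24h0m0s); once it lapses, a fresh token and join line can be minted on the control plane with standard kubeadm (sketch, using this cluster's pinned binary path):
  $ sudo /var/lib/minikube/binaries/v1.34.2/kubeadm token create --print-join-command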
I1212 19:30:37.743242 140968 cni.go:84] Creating CNI manager for ""
I1212 19:30:37.743250 140968 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1212 19:30:37.744679 140968 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1212 19:30:37.745792 140968 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1212 19:30:37.759539 140968 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1212 19:30:37.786435 140968 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1212 19:30:37.786520 140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1212 19:30:37.786548 140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-347541 minikube.k8s.io/updated_at=2025_12_12T19_30_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300 minikube.k8s.io/name=addons-347541 minikube.k8s.io/primary=true
I1212 19:30:37.829814 140968 ops.go:34] apiserver oom_adj: -16
I1212 19:30:37.915474 140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1212 19:30:38.416395 140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1212 19:30:38.916407 140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1212 19:30:39.416473 140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1212 19:30:39.916258 140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1212 19:30:40.416215 140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1212 19:30:40.916138 140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1212 19:30:41.416318 140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1212 19:30:41.916089 140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1212 19:30:42.037154 140968 kubeadm.go:1114] duration metric: took 4.250701029s to wait for elevateKubeSystemPrivileges
I1212 19:30:42.037242 140968 kubeadm.go:403] duration metric: took 17.661224703s to StartCluster
I1212 19:30:42.037273 140968 settings.go:142] acquiring lock: {Name:mk2e3b99c7ed93165698abc6c533d079febb6d28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:30:42.037478 140968 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22112-135957/kubeconfig
I1212 19:30:42.038072 140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/kubeconfig: {Name:mkab6c8db323de95c4a5daef1e17fdaffcd571ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:30:42.038369 140968 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1212 19:30:42.038422 140968 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1212 19:30:42.038461 140968 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
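Note: the toEnable map above is the full addon matrix for this profile, with everything the test requested flipped to true. The same state can be read back through the CLI (sketch, using this run's binary and profile):
  $ out/minikube-linux-amd64 addons list -p addons-347541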
I1212 19:30:42.038596 140968 addons.go:70] Setting yakd=true in profile "addons-347541"
I1212 19:30:42.038616 140968 addons.go:70] Setting inspektor-gadget=true in profile "addons-347541"
I1212 19:30:42.038630 140968 addons.go:70] Setting storage-provisioner=true in profile "addons-347541"
I1212 19:30:42.038640 140968 addons.go:239] Setting addon storage-provisioner=true in "addons-347541"
I1212 19:30:42.038643 140968 addons.go:239] Setting addon inspektor-gadget=true in "addons-347541"
I1212 19:30:42.038662 140968 addons.go:70] Setting registry-creds=true in profile "addons-347541"
I1212 19:30:42.038674 140968 config.go:182] Loaded profile config "addons-347541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:30:42.038689 140968 host.go:66] Checking if "addons-347541" exists ...
I1212 19:30:42.038696 140968 host.go:66] Checking if "addons-347541" exists ...
I1212 19:30:42.038707 140968 addons.go:70] Setting metrics-server=true in profile "addons-347541"
I1212 19:30:42.038720 140968 addons.go:239] Setting addon metrics-server=true in "addons-347541"
I1212 19:30:42.038732 140968 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-347541"
I1212 19:30:42.038748 140968 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-347541"
I1212 19:30:42.038765 140968 host.go:66] Checking if "addons-347541" exists ...
I1212 19:30:42.038754 140968 addons.go:70] Setting default-storageclass=true in profile "addons-347541"
I1212 19:30:42.038771 140968 addons.go:70] Setting volcano=true in profile "addons-347541"
I1212 19:30:42.038794 140968 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-347541"
I1212 19:30:42.038797 140968 addons.go:239] Setting addon volcano=true in "addons-347541"
I1212 19:30:42.038826 140968 host.go:66] Checking if "addons-347541" exists ...
I1212 19:30:42.039183 140968 addons.go:70] Setting cloud-spanner=true in profile "addons-347541"
I1212 19:30:42.039207 140968 addons.go:239] Setting addon cloud-spanner=true in "addons-347541"
I1212 19:30:42.039232 140968 host.go:66] Checking if "addons-347541" exists ...
I1212 19:30:42.039367 140968 addons.go:70] Setting ingress=true in profile "addons-347541"
I1212 19:30:42.039383 140968 addons.go:239] Setting addon ingress=true in "addons-347541"
I1212 19:30:42.039425 140968 host.go:66] Checking if "addons-347541" exists ...
I1212 19:30:42.039982 140968 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-347541"
I1212 19:30:42.040034 140968 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-347541"
I1212 19:30:42.040064 140968 host.go:66] Checking if "addons-347541" exists ...
I1212 19:30:42.040138 140968 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-347541"
I1212 19:30:42.040155 140968 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-347541"
I1212 19:30:42.040176 140968 host.go:66] Checking if "addons-347541" exists ...
I1212 19:30:42.040292 140968 addons.go:70] Setting ingress-dns=true in profile "addons-347541"
I1212 19:30:42.040310 140968 addons.go:239] Setting addon ingress-dns=true in "addons-347541"
I1212 19:30:42.040349 140968 host.go:66] Checking if "addons-347541" exists ...
I1212 19:30:42.038622 140968 addons.go:239] Setting addon yakd=true in "addons-347541"
I1212 19:30:42.040396 140968 host.go:66] Checking if "addons-347541" exists ...
I1212 19:30:42.038698 140968 addons.go:239] Setting addon registry-creds=true in "addons-347541"
I1212 19:30:42.040715 140968 addons.go:70] Setting gcp-auth=true in profile "addons-347541"
I1212 19:30:42.040720 140968 host.go:66] Checking if "addons-347541" exists ...
I1212 19:30:42.040764 140968 mustload.go:66] Loading cluster: addons-347541
I1212 19:30:42.040994 140968 config.go:182] Loaded profile config "addons-347541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:30:42.041017 140968 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-347541"
I1212 19:30:42.041068 140968 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-347541"
I1212 19:30:42.041103 140968 host.go:66] Checking if "addons-347541" exists ...
I1212 19:30:42.041156 140968 addons.go:70] Setting registry=true in profile "addons-347541"
I1212 19:30:42.041170 140968 addons.go:239] Setting addon registry=true in "addons-347541"
I1212 19:30:42.041188 140968 host.go:66] Checking if "addons-347541" exists ...
I1212 19:30:42.041365 140968 out.go:179] * Verifying Kubernetes components...
I1212 19:30:42.041420 140968 addons.go:70] Setting volumesnapshots=true in profile "addons-347541"
I1212 19:30:42.041443 140968 addons.go:239] Setting addon volumesnapshots=true in "addons-347541"
I1212 19:30:42.041474 140968 host.go:66] Checking if "addons-347541" exists ...
I1212 19:30:42.042606 140968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
W1212 19:30:42.045206 140968 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1212 19:30:42.047241 140968 addons.go:239] Setting addon default-storageclass=true in "addons-347541"
I1212 19:30:42.047275 140968 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-347541"
I1212 19:30:42.047282 140968 host.go:66] Checking if "addons-347541" exists ...
I1212 19:30:42.047314 140968 host.go:66] Checking if "addons-347541" exists ...
I1212 19:30:42.047514 140968 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1212 19:30:42.048639 140968 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1212 19:30:42.048663 140968 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
I1212 19:30:42.048665 140968 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1212 19:30:42.048766 140968 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1212 19:30:42.048689 140968 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1212 19:30:42.048642 140968 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
I1212 19:30:42.050304 140968 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1212 19:30:42.050326 140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1212 19:30:42.050424 140968 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1212 19:30:42.050442 140968 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1212 19:30:42.050417 140968 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1212 19:30:42.050561 140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1212 19:30:42.050308 140968 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1212 19:30:42.051006 140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1212 19:30:42.050453 140968 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1212 19:30:42.050465 140968 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1212 19:30:42.050975 140968 host.go:66] Checking if "addons-347541" exists ...
I1212 19:30:42.051938 140968 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1212 19:30:42.051958 140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1212 19:30:42.053507 140968 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1212 19:30:42.053529 140968 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1212 19:30:42.053529 140968 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
I1212 19:30:42.053551 140968 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1212 19:30:42.053997 140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1212 19:30:42.053560 140968 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1212 19:30:42.053670 140968 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1212 19:30:42.054325 140968 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1212 19:30:42.054243 140968 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1212 19:30:42.054243 140968 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1212 19:30:42.054246 140968 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1212 19:30:42.054996 140968 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1212 19:30:42.055087 140968 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1212 19:30:42.055374 140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1212 19:30:42.055686 140968 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1212 19:30:42.055704 140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1212 19:30:42.056299 140968 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1212 19:30:42.056380 140968 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1212 19:30:42.056400 140968 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1212 19:30:42.056387 140968 out.go:179] - Using image docker.io/registry:3.0.0
I1212 19:30:42.056471 140968 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1212 19:30:42.058159 140968 out.go:179] - Using image docker.io/busybox:stable
I1212 19:30:42.058180 140968 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1212 19:30:42.058259 140968 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1212 19:30:42.058448 140968 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1212 19:30:42.058473 140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1212 19:30:42.058447 140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1212 19:30:42.059434 140968 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1212 19:30:42.059463 140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1212 19:30:42.059965 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.060711 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.061003 140968 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1212 19:30:42.061354 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.061988 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:42.062024 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.062297 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:42.062332 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.062461 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.063150 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:42.063323 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:42.063357 140968 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1212 19:30:42.063444 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:42.063478 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.063578 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.064144 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:42.065023 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:42.065065 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.065594 140968 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1212 19:30:42.065594 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.065670 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:42.065708 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.065850 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:42.066536 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:42.066737 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.067401 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:42.067415 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.067499 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.068014 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:42.068060 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:42.068097 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.068459 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:42.068491 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.068505 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:42.068610 140968 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1212 19:30:42.068864 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.068896 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:42.069045 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.069049 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.069795 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:42.069827 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:42.069834 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.069863 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.069960 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:42.070060 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.070150 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:42.070187 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:42.070313 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.070579 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:42.070931 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:42.070960 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.070968 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.071126 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.071217 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:42.071474 140968 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1212 19:30:42.071809 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:42.071832 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:42.071850 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.071865 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.072037 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:42.072268 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:42.072509 140968 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1212 19:30:42.072527 140968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1212 19:30:42.074808 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.075214 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:42.075255 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:42.075465 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
W1212 19:30:42.498183 140968 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:45836->192.168.39.202:22: read: connection reset by peer
I1212 19:30:42.498224 140968 retry.go:31] will retry after 307.207548ms: ssh: handshake failed: read tcp 192.168.39.1:45836->192.168.39.202:22: read: connection reset by peer
I1212 19:30:42.788735 140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1212 19:30:42.882868 140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1212 19:30:43.007536 140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1212 19:30:43.030545 140968 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1212 19:30:43.030570 140968 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1212 19:30:43.037875 140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1212 19:30:43.043916 140968 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1212 19:30:43.043938 140968 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1212 19:30:43.045010 140968 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1212 19:30:43.045028 140968 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1212 19:30:43.076915 140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1212 19:30:43.091459 140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1212 19:30:43.135445 140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1212 19:30:43.161314 140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1212 19:30:43.228759 140968 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1212 19:30:43.228791 140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1212 19:30:43.256924 140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1212 19:30:43.316411 140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1212 19:30:43.372722 140968 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.334319389s)
I1212 19:30:43.372845 140968 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.330204686s)
I1212 19:30:43.372944 140968 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1212 19:30:43.372943 140968 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1212 19:30:43.537404 140968 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1212 19:30:43.537440 140968 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1212 19:30:43.642927 140968 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1212 19:30:43.642954 140968 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1212 19:30:43.806680 140968 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1212 19:30:43.806715 140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1212 19:30:43.933963 140968 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1212 19:30:43.933999 140968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1212 19:30:43.982194 140968 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1212 19:30:43.982232 140968 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1212 19:30:44.006492 140968 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1212 19:30:44.006520 140968 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1212 19:30:44.047583 140968 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1212 19:30:44.047621 140968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1212 19:30:44.059717 140968 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1212 19:30:44.059745 140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1212 19:30:44.071183 140968 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1212 19:30:44.071214 140968 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1212 19:30:44.171789 140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1212 19:30:44.366074 140968 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1212 19:30:44.366121 140968 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1212 19:30:44.376527 140968 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1212 19:30:44.376551 140968 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1212 19:30:44.398265 140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1212 19:30:44.448003 140968 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1212 19:30:44.448030 140968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1212 19:30:44.544019 140968 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1212 19:30:44.544047 140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1212 19:30:44.571554 140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1212 19:30:44.712676 140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1212 19:30:44.898418 140968 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1212 19:30:44.898462 140968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1212 19:30:45.105077 140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.316291183s)
I1212 19:30:45.278698 140968 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1212 19:30:45.278734 140968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1212 19:30:45.888452 140968 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1212 19:30:45.888492 140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1212 19:30:46.324416 140968 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1212 19:30:46.324451 140968 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1212 19:30:46.659047 140968 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1212 19:30:46.659073 140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1212 19:30:46.894583 140968 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1212 19:30:46.894608 140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1212 19:30:47.380602 140968 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1212 19:30:47.380630 140968 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1212 19:30:47.642669 140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1212 19:30:48.456497 140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.573587528s)
I1212 19:30:48.456594 140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.449019797s)
I1212 19:30:48.456673 140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.41876682s)
I1212 19:30:48.495601 140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.418638209s)
I1212 19:30:48.495665 140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.40417141s)
I1212 19:30:49.485218 140968 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1212 19:30:49.488385 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:49.488853 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:49.488887 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:49.489054 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:49.891240 140968 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1212 19:30:50.021210 140968 addons.go:239] Setting addon gcp-auth=true in "addons-347541"
I1212 19:30:50.021279 140968 host.go:66] Checking if "addons-347541" exists ...
I1212 19:30:50.023291 140968 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1212 19:30:50.026057 140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:50.026518 140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
I1212 19:30:50.026550 140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
I1212 19:30:50.026719 140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
I1212 19:30:51.390040 140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.254520086s)
I1212 19:30:51.390090 140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.228733885s)
I1212 19:30:51.390101 140968 addons.go:495] Verifying addon ingress=true in "addons-347541"
I1212 19:30:51.390143 140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.133179371s)
I1212 19:30:51.390194 140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.073741466s)
I1212 19:30:51.390251 140968 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.017214716s)
I1212 19:30:51.390278 140968 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1212 19:30:51.390233 140968 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.017269992s)
I1212 19:30:51.390355 140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.218535791s)
I1212 19:30:51.390376 140968 addons.go:495] Verifying addon registry=true in "addons-347541"
I1212 19:30:51.390444 140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.992113142s)
I1212 19:30:51.390499 140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.818908881s)
I1212 19:30:51.390524 140968 addons.go:495] Verifying addon metrics-server=true in "addons-347541"
I1212 19:30:51.390626 140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.677917582s)
W1212 19:30:51.391126 140968 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1212 19:30:51.391154 140968 retry.go:31] will retry after 264.780265ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1212 19:30:51.391425 140968 out.go:179] * Verifying registry addon...
I1212 19:30:51.391426 140968 out.go:179] * Verifying ingress addon...
I1212 19:30:51.391488 140968 node_ready.go:35] waiting up to 6m0s for node "addons-347541" to be "Ready" ...
I1212 19:30:51.392152 140968 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-347541 service yakd-dashboard -n yakd-dashboard
I1212 19:30:51.394251 140968 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1212 19:30:51.394319 140968 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1212 19:30:51.444483 140968 node_ready.go:49] node "addons-347541" is "Ready"
I1212 19:30:51.444516 140968 node_ready.go:38] duration metric: took 52.68821ms for node "addons-347541" to be "Ready" ...
I1212 19:30:51.444533 140968 api_server.go:52] waiting for apiserver process to appear ...
I1212 19:30:51.444594 140968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 19:30:51.463301 140968 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1212 19:30:51.463337 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:30:51.463301 140968 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1212 19:30:51.463361 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
W1212 19:30:51.478349 140968 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I1212 19:30:51.656538 140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1212 19:30:51.905878 140968 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-347541" context rescaled to 1 replicas
I1212 19:30:51.907526 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:30:51.908023 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1212 19:30:52.427666 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1212 19:30:52.427756 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:30:52.610896 140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.968152855s)
I1212 19:30:52.610931 140968 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.587609311s)
I1212 19:30:52.610958 140968 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-347541"
I1212 19:30:52.610993 140968 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.166377678s)
I1212 19:30:52.611020 140968 api_server.go:72] duration metric: took 10.572555968s to wait for apiserver process to appear ...
I1212 19:30:52.611158 140968 api_server.go:88] waiting for apiserver healthz status ...
I1212 19:30:52.611212 140968 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
I1212 19:30:52.612369 140968 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1212 19:30:52.613118 140968 out.go:179] * Verifying csi-hostpath-driver addon...
I1212 19:30:52.614400 140968 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1212 19:30:52.615143 140968 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1212 19:30:52.615713 140968 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1212 19:30:52.615728 140968 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1212 19:30:52.646195 140968 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1212 19:30:52.646216 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:30:52.646765 140968 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
ok
I1212 19:30:52.650199 140968 api_server.go:141] control plane version: v1.34.2
I1212 19:30:52.650229 140968 api_server.go:131] duration metric: took 39.061885ms to wait for apiserver health ...
I1212 19:30:52.650259 140968 system_pods.go:43] waiting for kube-system pods to appear ...
I1212 19:30:52.684054 140968 system_pods.go:59] 20 kube-system pods found
I1212 19:30:52.684092 140968 system_pods.go:61] "amd-gpu-device-plugin-2xl4r" [ede87043-19cb-485d-8eb9-d84d809cdc54] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1212 19:30:52.684100 140968 system_pods.go:61] "coredns-66bc5c9577-vvxxj" [5d9292f5-1548-47ef-a76a-f488221712e1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1212 19:30:52.684125 140968 system_pods.go:61] "coredns-66bc5c9577-zf7x7" [193b24c3-32e5-4ca1-bebb-0a249a6a436e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1212 19:30:52.684134 140968 system_pods.go:61] "csi-hostpath-attacher-0" [7500a8ca-2ffc-4d75-ae8c-e49175987633] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1212 19:30:52.684138 140968 system_pods.go:61] "csi-hostpath-resizer-0" [3b07e95c-174c-43a1-b28e-d07f71af1028] Pending
I1212 19:30:52.684145 140968 system_pods.go:61] "csi-hostpathplugin-mkfcn" [fe53d3cb-3e18-4853-9fdd-2c0f5b822937] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1212 19:30:52.684148 140968 system_pods.go:61] "etcd-addons-347541" [cbb27be6-7d31-4137-9e8e-81f5778d9889] Running
I1212 19:30:52.684153 140968 system_pods.go:61] "kube-apiserver-addons-347541" [65becc66-812b-4417-8600-67b7408d63e8] Running
I1212 19:30:52.684157 140968 system_pods.go:61] "kube-controller-manager-addons-347541" [f2af5f9f-d9b9-4469-b80f-08bfe2e19358] Running
I1212 19:30:52.684162 140968 system_pods.go:61] "kube-ingress-dns-minikube" [2b04ee36-5eba-4b96-995d-1a77e2ddb46b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1212 19:30:52.684165 140968 system_pods.go:61] "kube-proxy-x5bxp" [1efedaf7-228f-4318-bd8c-a85d80dd0b77] Running
I1212 19:30:52.684169 140968 system_pods.go:61] "kube-scheduler-addons-347541" [b7da366a-1bbf-480f-8187-28545db9ed0a] Running
I1212 19:30:52.684173 140968 system_pods.go:61] "metrics-server-85b7d694d7-tmr5k" [5dd23de9-3bea-45d2-b80b-4b966bf80193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1212 19:30:52.684179 140968 system_pods.go:61] "nvidia-device-plugin-daemonset-s9zn5" [9049612d-22d5-42ee-a561-b6acda7ef4e9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1212 19:30:52.684185 140968 system_pods.go:61] "registry-6b586f9694-5td7r" [201134be-c27b-4ed0-83ec-71d107dac0c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1212 19:30:52.684190 140968 system_pods.go:61] "registry-creds-764b6fb674-2lqlc" [8bde3033-d2e9-4aa8-85ec-6849a565941b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1212 19:30:52.684194 140968 system_pods.go:61] "registry-proxy-gxsjd" [0943e635-926e-40e1-9444-adcc285ac289] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1212 19:30:52.684200 140968 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4kgt2" [073ae593-9fae-4668-912c-99370421b081] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1212 19:30:52.684210 140968 system_pods.go:61] "snapshot-controller-7d9fbc56b8-krfxw" [77651017-de3a-4f06-851e-1650fb810697] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1212 19:30:52.684213 140968 system_pods.go:61] "storage-provisioner" [1f852b24-b5fe-4b85-8007-74282a8e3746] Running
I1212 19:30:52.684220 140968 system_pods.go:74] duration metric: took 33.955869ms to wait for pod list to return data ...
I1212 19:30:52.684229 140968 default_sa.go:34] waiting for default service account to be created ...
I1212 19:30:52.701443 140968 default_sa.go:45] found service account: "default"
I1212 19:30:52.701469 140968 default_sa.go:55] duration metric: took 17.235107ms for default service account to be created ...
I1212 19:30:52.701480 140968 system_pods.go:116] waiting for k8s-apps to be running ...
I1212 19:30:52.741834 140968 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1212 19:30:52.741868 140968 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1212 19:30:52.770410 140968 system_pods.go:86] 20 kube-system pods found
I1212 19:30:52.770476 140968 system_pods.go:89] "amd-gpu-device-plugin-2xl4r" [ede87043-19cb-485d-8eb9-d84d809cdc54] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1212 19:30:52.770493 140968 system_pods.go:89] "coredns-66bc5c9577-vvxxj" [5d9292f5-1548-47ef-a76a-f488221712e1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1212 19:30:52.770509 140968 system_pods.go:89] "coredns-66bc5c9577-zf7x7" [193b24c3-32e5-4ca1-bebb-0a249a6a436e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1212 19:30:52.770520 140968 system_pods.go:89] "csi-hostpath-attacher-0" [7500a8ca-2ffc-4d75-ae8c-e49175987633] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1212 19:30:52.770529 140968 system_pods.go:89] "csi-hostpath-resizer-0" [3b07e95c-174c-43a1-b28e-d07f71af1028] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1212 19:30:52.770544 140968 system_pods.go:89] "csi-hostpathplugin-mkfcn" [fe53d3cb-3e18-4853-9fdd-2c0f5b822937] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1212 19:30:52.770550 140968 system_pods.go:89] "etcd-addons-347541" [cbb27be6-7d31-4137-9e8e-81f5778d9889] Running
I1212 19:30:52.770557 140968 system_pods.go:89] "kube-apiserver-addons-347541" [65becc66-812b-4417-8600-67b7408d63e8] Running
I1212 19:30:52.770564 140968 system_pods.go:89] "kube-controller-manager-addons-347541" [f2af5f9f-d9b9-4469-b80f-08bfe2e19358] Running
I1212 19:30:52.770573 140968 system_pods.go:89] "kube-ingress-dns-minikube" [2b04ee36-5eba-4b96-995d-1a77e2ddb46b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1212 19:30:52.770580 140968 system_pods.go:89] "kube-proxy-x5bxp" [1efedaf7-228f-4318-bd8c-a85d80dd0b77] Running
I1212 19:30:52.770586 140968 system_pods.go:89] "kube-scheduler-addons-347541" [b7da366a-1bbf-480f-8187-28545db9ed0a] Running
I1212 19:30:52.770606 140968 system_pods.go:89] "metrics-server-85b7d694d7-tmr5k" [5dd23de9-3bea-45d2-b80b-4b966bf80193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1212 19:30:52.770625 140968 system_pods.go:89] "nvidia-device-plugin-daemonset-s9zn5" [9049612d-22d5-42ee-a561-b6acda7ef4e9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1212 19:30:52.770634 140968 system_pods.go:89] "registry-6b586f9694-5td7r" [201134be-c27b-4ed0-83ec-71d107dac0c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1212 19:30:52.770644 140968 system_pods.go:89] "registry-creds-764b6fb674-2lqlc" [8bde3033-d2e9-4aa8-85ec-6849a565941b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1212 19:30:52.770652 140968 system_pods.go:89] "registry-proxy-gxsjd" [0943e635-926e-40e1-9444-adcc285ac289] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1212 19:30:52.770661 140968 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4kgt2" [073ae593-9fae-4668-912c-99370421b081] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1212 19:30:52.770672 140968 system_pods.go:89] "snapshot-controller-7d9fbc56b8-krfxw" [77651017-de3a-4f06-851e-1650fb810697] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1212 19:30:52.770684 140968 system_pods.go:89] "storage-provisioner" [1f852b24-b5fe-4b85-8007-74282a8e3746] Running
I1212 19:30:52.770699 140968 system_pods.go:126] duration metric: took 69.208924ms to wait for k8s-apps to be running ...
I1212 19:30:52.770714 140968 system_svc.go:44] waiting for kubelet service to be running ....
I1212 19:30:52.770801 140968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1212 19:30:52.806613 140968 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1212 19:30:52.806646 140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1212 19:30:52.903397 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1212 19:30:52.904197 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:30:52.921662 140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1212 19:30:53.122733 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:30:53.402024 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1212 19:30:53.404763 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:30:53.621194 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:30:53.639676 140968 system_svc.go:56] duration metric: took 868.95016ms WaitForService to wait for kubelet
I1212 19:30:53.639723 140968 kubeadm.go:587] duration metric: took 11.601255184s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1212 19:30:53.639782 140968 node_conditions.go:102] verifying NodePressure condition ...
I1212 19:30:53.639678 140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.983084615s)
I1212 19:30:53.656218 140968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1212 19:30:53.656253 140968 node_conditions.go:123] node cpu capacity is 2
I1212 19:30:53.656307 140968 node_conditions.go:105] duration metric: took 16.509424ms to run NodePressure ...
I1212 19:30:53.656324 140968 start.go:242] waiting for startup goroutines ...
I1212 19:30:53.906953 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1212 19:30:53.907946 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:30:54.041289 140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.119580351s)
I1212 19:30:54.042533 140968 addons.go:495] Verifying addon gcp-auth=true in "addons-347541"
I1212 19:30:54.044707 140968 out.go:179] * Verifying gcp-auth addon...
I1212 19:30:54.046392 140968 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1212 19:30:54.054336 140968 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1212 19:30:54.054374 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:30:54.119123 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
[... 251 similar kapi.go:96 "waiting for pod" poll lines elided: registry, ingress-nginx, csi-hostpath-driver and gcp-auth all remained Pending, polled every ~250-500ms from 19:30:54 to 19:31:25 ...]
I1212 19:31:25.618938 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:25.900264 140968 kapi.go:107] duration metric: took 34.505937199s to wait for kubernetes.io/minikube-addons=registry ...
I1212 19:31:25.900411 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:26.051457 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:26.153902 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:26.400498 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:26.549924 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:26.620491 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:26.897710 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:27.051694 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:27.152649 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:27.400482 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:27.550182 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:27.651054 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:27.898350 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:28.050588 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:28.119391 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:28.397542 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:28.561580 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:28.623576 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:28.899256 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:29.053908 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:29.120206 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:29.399043 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:29.553181 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:29.621184 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:29.898391 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:30.051184 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:30.121523 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:30.399297 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:30.552579 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:30.620386 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:30.899760 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:31.079925 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:31.120592 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:31.397370 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:31.551277 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:31.619902 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:31.899483 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:32.050052 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:32.121228 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:32.397600 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:32.552043 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:32.619982 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:32.898363 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:33.051325 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:33.118373 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:33.397523 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:33.549562 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:33.619154 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:33.898372 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:34.049238 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:34.119119 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:34.399251 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:34.551870 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:34.628126 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:34.901043 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:35.053544 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:35.118924 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:35.397959 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:35.550802 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:35.620394 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:35.900359 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:36.054065 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:36.121752 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:36.398748 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:36.550521 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:36.621151 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:36.897678 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:37.064198 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:37.121209 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:37.397610 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:37.553846 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:37.619993 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:37.927783 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:38.053007 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:38.153285 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:38.397619 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:38.550502 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:38.652053 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:38.898472 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:39.050180 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:39.119198 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:39.397206 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:39.551253 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:39.618818 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:39.898829 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:40.049831 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:40.118880 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:40.401449 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:40.551225 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:40.619794 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:40.901873 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:41.050034 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:41.123689 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:41.398834 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:41.550242 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:41.618371 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:41.900032 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:42.050501 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:42.120243 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:42.403296 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:42.550702 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:42.621778 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:42.898133 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:43.051781 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:43.118545 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:43.398318 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:43.552749 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:43.622169 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:43.899465 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:44.053725 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:44.119808 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:44.399007 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:44.556427 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:44.624674 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:44.903667 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:45.050903 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:45.120297 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:45.399154 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:45.552668 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:45.618326 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:45.900435 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:46.048936 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:46.124074 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:46.404824 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:46.550820 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:46.619100 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:46.898752 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:47.049672 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:47.123633 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:47.398937 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:47.549944 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:47.619732 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:47.898003 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:48.057437 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:48.157688 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:48.399498 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:48.550093 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:48.619493 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:48.897818 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:49.050342 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:49.118570 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:49.398985 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:49.692447 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:49.693089 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:49.902721 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:50.053347 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:50.120416 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:50.398441 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:50.552841 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:50.619664 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:31:50.899356 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:31:51.050944 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:31:51.129581 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
[... 72 near-identical poll lines elided: kapi.go:96 re-checked the same three selectors (app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=csi-hostpath-driver) roughly every 500ms from 19:31:51 through 19:32:03, each check still reporting Pending ...]
I1212 19:32:03.398769 140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1212 19:32:03.550741 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:32:03.618995 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:32:03.898979 140968 kapi.go:107] duration metric: took 1m12.504725695s to wait for app.kubernetes.io/name=ingress-nginx ...
I1212 19:32:04.052014 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:32:04.121228 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:32:04.552556 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:32:04.620192 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:32:05.050316 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:32:05.118849 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:32:05.550083 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:32:05.619382 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1212 19:32:06.050213 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:32:06.119620 140968 kapi.go:107] duration metric: took 1m13.504471387s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1212 19:32:06.551144 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:32:07.050465 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:32:07.550586 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:32:08.054688 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:32:08.553261 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:32:09.051320 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:32:09.552445 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:32:10.052347 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:32:10.550608 140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1212 19:32:11.050544 140968 kapi.go:107] duration metric: took 1m17.004149978s to wait for kubernetes.io/minikube-addons=gcp-auth ...
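The kapi.go:96 / kapi.go:107 lines above come from a poll loop: list the pods matching a label selector on a fixed interval, log the current phase while they are Pending, and record a duration metric once they are ready. Below is a hedged sketch of that pattern using client-go; it is not minikube's actual kapi.go (which gates on pod phase rather than the Ready condition), and the kubeconfig path, namespace, and 6-minute timeout are illustrative assumptions.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsReady returns true once at least one pod matches the selector and all
// matching pods carry the PodReady condition with status True.
func podsReady(ctx context.Context, c kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, nil // treat transient API errors as "not ready yet" and keep polling
	}
	if len(pods.Items) == 0 {
		return false, nil
	}
	for _, p := range pods.Items {
		ready := false
		for _, cond := range p.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			// Mirrors the kapi.go:96 log shape seen above.
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			return false, nil
		}
	}
	return true, nil
}

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed location
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	start := time.Now()
	// Re-check every 500ms, matching the cadence visible in the elided poll lines.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			return podsReady(ctx, client, "kube-system", "kubernetes.io/minikube-addons=gcp-auth")
		})
	if err != nil {
		panic(err)
	}
	// Mirrors the kapi.go:107 duration-metric log shape.
	fmt.Printf("duration metric: took %s to wait for kubernetes.io/minikube-addons=gcp-auth\n", time.Since(start))
}
```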
I1212 19:32:11.052025 140968 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-347541 cluster.
I1212 19:32:11.053102 140968 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1212 19:32:11.054183 140968 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I1212 19:32:11.055308 140968 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, cloud-spanner, inspektor-gadget, amd-gpu-device-plugin, registry-creds, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
I1212 19:32:11.056352 140968 addons.go:530] duration metric: took 1m29.017892967s for enable addons: enabled=[nvidia-device-plugin ingress-dns storage-provisioner cloud-spanner inspektor-gadget amd-gpu-device-plugin registry-creds metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
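The gcp-auth message above says a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of what that looks like via client-go follows; the pod name, namespace, image, and label value are placeholders (per the message, only the label key matters), and client setup mirrors the previous sketch.

```go
package main

import (
	"context"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		filepath.Join(os.Getenv("HOME"), ".kube", "config")) // assumed kubeconfig path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds", // hypothetical pod name
			Namespace: "default",
			// Per the gcp-auth note above, this label key opts the pod out of
			// having GCP credentials mounted; the value is arbitrary.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "public.ecr.aws/nginx/nginx:latest", // placeholder image
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```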
I1212 19:32:11.056390 140968 start.go:247] waiting for cluster config update ...
I1212 19:32:11.056407 140968 start.go:256] writing updated cluster config ...
I1212 19:32:11.056664 140968 ssh_runner.go:195] Run: rm -f paused
I1212 19:32:11.062267 140968 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1212 19:32:11.065991 140968 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vvxxj" in "kube-system" namespace to be "Ready" or be gone ...
I1212 19:32:11.070583 140968 pod_ready.go:94] pod "coredns-66bc5c9577-vvxxj" is "Ready"
I1212 19:32:11.070603 140968 pod_ready.go:86] duration metric: took 4.589865ms for pod "coredns-66bc5c9577-vvxxj" in "kube-system" namespace to be "Ready" or be gone ...
I1212 19:32:11.072585 140968 pod_ready.go:83] waiting for pod "etcd-addons-347541" in "kube-system" namespace to be "Ready" or be gone ...
I1212 19:32:11.076835 140968 pod_ready.go:94] pod "etcd-addons-347541" is "Ready"
I1212 19:32:11.076853 140968 pod_ready.go:86] duration metric: took 4.250439ms for pod "etcd-addons-347541" in "kube-system" namespace to be "Ready" or be gone ...
I1212 19:32:11.078993 140968 pod_ready.go:83] waiting for pod "kube-apiserver-addons-347541" in "kube-system" namespace to be "Ready" or be gone ...
I1212 19:32:11.085172 140968 pod_ready.go:94] pod "kube-apiserver-addons-347541" is "Ready"
I1212 19:32:11.085190 140968 pod_ready.go:86] duration metric: took 6.180955ms for pod "kube-apiserver-addons-347541" in "kube-system" namespace to be "Ready" or be gone ...
I1212 19:32:11.087903 140968 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-347541" in "kube-system" namespace to be "Ready" or be gone ...
I1212 19:32:11.466780 140968 pod_ready.go:94] pod "kube-controller-manager-addons-347541" is "Ready"
I1212 19:32:11.466810 140968 pod_ready.go:86] duration metric: took 378.889786ms for pod "kube-controller-manager-addons-347541" in "kube-system" namespace to be "Ready" or be gone ...
I1212 19:32:11.666238 140968 pod_ready.go:83] waiting for pod "kube-proxy-x5bxp" in "kube-system" namespace to be "Ready" or be gone ...
I1212 19:32:12.066028 140968 pod_ready.go:94] pod "kube-proxy-x5bxp" is "Ready"
I1212 19:32:12.066058 140968 pod_ready.go:86] duration metric: took 399.793535ms for pod "kube-proxy-x5bxp" in "kube-system" namespace to be "Ready" or be gone ...
I1212 19:32:12.266206 140968 pod_ready.go:83] waiting for pod "kube-scheduler-addons-347541" in "kube-system" namespace to be "Ready" or be gone ...
I1212 19:32:12.667224 140968 pod_ready.go:94] pod "kube-scheduler-addons-347541" is "Ready"
I1212 19:32:12.667253 140968 pod_ready.go:86] duration metric: took 401.02482ms for pod "kube-scheduler-addons-347541" in "kube-system" namespace to be "Ready" or be gone ...
I1212 19:32:12.667265 140968 pod_ready.go:40] duration metric: took 1.604968059s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1212 19:32:12.713555 140968 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
I1212 19:32:12.716172 140968 out.go:179] * Done! kubectl is now configured to use "addons-347541" cluster and "default" namespace by default
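The start.go:625 line above compares the kubectl client version against the cluster version and reports the minor-version skew (here 1.34.3 vs 1.34.2, skew 0). A hedged sketch of that comparison follows; it is not minikube's actual implementation, just the arithmetic the log line implies, fed the version strings from the log.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew parses two "major.minor.patch" strings and returns the absolute
// difference of their minor components.
func minorSkew(a, b string) (int, error) {
	parse := func(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("malformed version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	ma, err := parse(a)
	if err != nil {
		return 0, err
	}
	mb, err := parse(b)
	if err != nil {
		return 0, err
	}
	if ma > mb {
		return ma - mb, nil
	}
	return mb - ma, nil
}

func main() {
	skew, err := minorSkew("1.34.3", "1.34.2")
	if err != nil {
		panic(err)
	}
	fmt.Printf("kubectl: 1.34.3, cluster: 1.34.2 (minor skew: %d)\n", skew) // prints skew 0
}
```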
==> CRI-O <==
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.751524705Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3a73781e60a95e8b2a43149459448fd7c7e33dc7082e8230749f5479db18a37e,PodSandboxId:d1bc8541b182df3000fab9ab6672740aafcf7a7b30783934ee5699a6cd87946c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765567979389597466,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28bd2e4c-a606-45ae-bff8-93cc740702b2,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e57e904e6080a43f0c054c3ca11aacc514a784efd56578147be73d316fdc7363,PodSandboxId:003c1dd6f275a26397bde10bac721f2c972d32f9770501bdbc84ca1ebe403c43,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765567937896034608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86482a73-fed6-4ee2-93dd-8079de7542f0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec93bc4394f3154447edaf2dccb4600daaf9b0499f3d4333a07e22e7d58673c,PodSandboxId:a46ac965878cf313218d8cc7223a8bdd5bb30542c5f584bc36d2861d6fb1f31e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765567922853623140,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-hppl2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7b5c91af-8fc7-4d47-875e-d78a54b2c59f,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5728418319c3823fde8ec0d5902908261a293f2719f68770fcf113e98bdce493,PodSandboxId:e5663746f2894d5e7c986087132b0c4043e181abee9a49a34a06356e29fe8c44,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765567904755015404,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-twfg2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8f2f4060-276e-41bb-bed9-c734bf5967ce,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b90859836907690570ed8f4f5cbc16fb5d0b64660f4f1bae895049c4c8514d,PodSandboxId:f6b67a56ec25ed99a6254ee2918988d2702c5d588a8c4e96230d61c5cf974c24,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765567904553571552,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pdz68,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 414c9a1f-9f9e-44b4-be77-987eadd5f18c,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2171eb5eb101a13d23803b5a8ebf0a86a8d17b45e7acbca8b43ba049f1b7512,PodSandboxId:a0471b2e05c0ab96a47f67c6fb7fdb8ee11a2762c72e3f21b7239a8255324897,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765567901140077918,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-4tdnr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c246219c-ebf0-4567-bacc-ed288d17a0e1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29cb209f08af1036fc97325bb4faaca22221448f614c69562527ca5dd4a9b13b,PodSandboxId:2dc32e7104f9974f471e97b0147337c2171986526e4fea85e43675d7f3ae83b2,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7
,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765567879411513042,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b04ee36-5eba-4b96-995d-1a77e2ddb46b,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f0e7375564e7015b2d15f50b51e3fb436315a9f6c364ec02fdd5c59190723c,PodSandboxId:f33474ff319cb99f546ffff3e938a58d9cff6269b69eda57c424b42ae86e876e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0
,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765567859984510571,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2xl4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede87043-19cb-485d-8eb9-d84d809cdc54,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa121a6614b9cb7f5e3f51937ba612d6ce2cf89d1dde25294b15039e14722e83,PodSandboxId:c9804addfe1ef68626ad31a2a5dddaca92997464acd44f1013af73e997e42e5d,Metadata:&ContainerMetad
ata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765567849703328907,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f852b24-b5fe-4b85-8007-74282a8e3746,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a0b61582ab330327b4249677dc6b464244ac28ab0a195dc7dbdf4a6dbf6b28,PodSandboxId:e494954752c0ff47cccd80ea99aba276d5dcfc49147e30b0eaba4630ea02883b,Metadata:&ContainerMetadata{Name:cor
edns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765567843647449435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vvxxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9292f5-1548-47ef-a76a-f488221712e1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b5a1c18d28614bf90454b948d289136b0f20c9341fe303e084b91bd607c3c0,PodSandboxId:46b7fe6eaddbb5a0738092f9932330558dd699058a0c1ecbdd81313662caae5e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765567842961644101,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x5bxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1efedaf7-228f-4318-bd8c-a85d80dd0b77,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e4a1f5db77865ef545f2488910456b56b11d204019cb86b5f5c0cc1d270cc0,PodSandboxId:1c7dc9dc3acc41ce092c7cbaf82d78103ff5f0cb1e52591555183d9316bf9980,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765567830144455928,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce3b7a702fbef9a6121b414f85545a0,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6f6f7fbcb408d5e35a803b554ef80fbe5b17a42b5c7b4dffc8e376aff7c5d3,PodSandboxId:5c3481bbc735f57e8e99bfc3693ca4746ed92c3acf843246ef44a7239802eeaf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765567830115776659,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e098927010764c91c96aa66fd9ba6efc,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993da190cfa742a9b8f23f3ae63ccae627606b22fdf705984f1cd8e26c9054f5,PodSandboxId:f0a593f81c8ac21d6b183fedd115a17bc50b72d69bf90826e37a050976ccbdee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765567830099509594,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40eda06
a75df0cd69d41a597946a693,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a66ca19ad2e53e2d61831c7925ed52e18848199736d2911d817c709827eda5,PodSandboxId:61e9bb926854de445e9d469dbc8eb0bf4a1494fd0965e66adf02270a40446bbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765567830089573242,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a47e93b8e8cc0ae4fe59ac4b3e6151,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bfb8f0e2-dec2-4ff9-a0a1-688ee158fc77 name=/runtime.v1.RuntimeService/ListContainers
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.761930170Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=cbd822b4-636b-489a-a0f4-1a102568cbcd name=/runtime.v1.RuntimeService/Version
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.761996140Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cbd822b4-636b-489a-a0f4-1a102568cbcd name=/runtime.v1.RuntimeService/Version
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.785710262Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5d90005-bc37-484a-b557-3d492b235c54 name=/runtime.v1.RuntimeService/Version
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.785808342Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5d90005-bc37-484a-b557-3d492b235c54 name=/runtime.v1.RuntimeService/Version
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.787157454Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ae4e5d5-f6cd-42e6-b809-5a99b3a49346 name=/runtime.v1.ImageService/ImageFsInfo
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.788402367Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765568122788347717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ae4e5d5-f6cd-42e6-b809-5a99b3a49346 name=/runtime.v1.ImageService/ImageFsInfo
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.789235021Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb8295ab-4d9e-46ff-9f02-d3cfaa5476f3 name=/runtime.v1.RuntimeService/ListContainers
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.789583270Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb8295ab-4d9e-46ff-9f02-d3cfaa5476f3 name=/runtime.v1.RuntimeService/ListContainers
[... ListContainersResponse elided: identical to the 19:35:22.751 response above apart from the request id (id=fb8295ab-4d9e-46ff-9f02-d3cfaa5476f3) ...]
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.813582526Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.820853827Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8036143c-d577-4c97-b63e-60f51cf9be82 name=/runtime.v1.RuntimeService/Version
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.821017639Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8036143c-d577-4c97-b63e-60f51cf9be82 name=/runtime.v1.RuntimeService/Version
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.822545089Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f1720d2-6cf9-4e8d-9686-5edff196c0ce name=/runtime.v1.ImageService/ImageFsInfo
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.823780109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765568122823750512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f1720d2-6cf9-4e8d-9686-5edff196c0ce name=/runtime.v1.ImageService/ImageFsInfo
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.824793820Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77cbf8e6-dbdf-43d5-ad7f-ea10209f6870 name=/runtime.v1.RuntimeService/ListContainers
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.824897394Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77cbf8e6-dbdf-43d5-ad7f-ea10209f6870 name=/runtime.v1.RuntimeService/ListContainers
[... third identical ListContainersResponse elided (id=77cbf8e6-dbdf-43d5-ad7f-ea10209f6870); the log excerpt is truncated partway through this entry ...]
,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765567859984510571,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2xl4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede87043-19cb-485d-8eb9-d84d809cdc54,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa121a6614b9cb7f5e3f51937ba612d6ce2cf89d1dde25294b15039e14722e83,PodSandboxId:c9804addfe1ef68626ad31a2a5dddaca92997464acd44f1013af73e997e42e5d,Metadata:&ContainerMetad
ata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765567849703328907,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f852b24-b5fe-4b85-8007-74282a8e3746,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a0b61582ab330327b4249677dc6b464244ac28ab0a195dc7dbdf4a6dbf6b28,PodSandboxId:e494954752c0ff47cccd80ea99aba276d5dcfc49147e30b0eaba4630ea02883b,Metadata:&ContainerMetadata{Name:cor
edns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765567843647449435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vvxxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9292f5-1548-47ef-a76a-f488221712e1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b5a1c18d28614bf90454b948d289136b0f20c9341fe303e084b91bd607c3c0,PodSandboxId:46b7fe6eaddbb5a0738092f9932330558dd699058a0c1ecbdd81313662caae5e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765567842961644101,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x5bxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1efedaf7-228f-4318-bd8c-a85d80dd0b77,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e4a1f5db77865ef545f2488910456b56b11d204019cb86b5f5c0cc1d270cc0,PodSandboxId:1c7dc9dc3acc41ce092c7cbaf82d78103ff5f0cb1e52591555183d9316bf9980,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765567830144455928,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce3b7a702fbef9a6121b414f85545a0,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6f6f7fbcb408d5e35a803b554ef80fbe5b17a42b5c7b4dffc8e376aff7c5d3,PodSandboxId:5c3481bbc735f57e8e99bfc3693ca4746ed92c3acf843246ef44a7239802eeaf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765567830115776659,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e098927010764c91c96aa66fd9ba6efc,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993da190cfa742a9b8f23f3ae63ccae627606b22fdf705984f1cd8e26c9054f5,PodSandboxId:f0a593f81c8ac21d6b183fedd115a17bc50b72d69bf90826e37a050976ccbdee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765567830099509594,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40eda06
a75df0cd69d41a597946a693,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a66ca19ad2e53e2d61831c7925ed52e18848199736d2911d817c709827eda5,PodSandboxId:61e9bb926854de445e9d469dbc8eb0bf4a1494fd0965e66adf02270a40446bbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765567830089573242,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a47e93b8e8cc0ae4fe59ac4b3e6151,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=77cbf8e6-dbdf-43d5-ad7f-ea10209f6870 name=/runtime.v1.RuntimeService/ListContainers
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.855153111Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd23bce0-a5c0-454a-8cd3-21894f402bf6 name=/runtime.v1.RuntimeService/Version
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.855296188Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd23bce0-a5c0-454a-8cd3-21894f402bf6 name=/runtime.v1.RuntimeService/Version
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.857978061Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5927c9fc-d0be-4b19-9267-508394eeecb9 name=/runtime.v1.ImageService/ImageFsInfo
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.860693308Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765568122860661173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5927c9fc-d0be-4b19-9267-508394eeecb9 name=/runtime.v1.ImageService/ImageFsInfo
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.863293776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbdec8a7-2bfc-46eb-a06c-5c65d509a0a9 name=/runtime.v1.RuntimeService/ListContainers
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.863392969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbdec8a7-2bfc-46eb-a06c-5c65d509a0a9 name=/runtime.v1.RuntimeService/ListContainers
Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.863993429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3a73781e60a95e8b2a43149459448fd7c7e33dc7082e8230749f5479db18a37e,PodSandboxId:d1bc8541b182df3000fab9ab6672740aafcf7a7b30783934ee5699a6cd87946c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765567979389597466,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28bd2e4c-a606-45ae-bff8-93cc740702b2,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e57e904e6080a43f0c054c3ca11aacc514a784efd56578147be73d316fdc7363,PodSandboxId:003c1dd6f275a26397bde10bac721f2c972d32f9770501bdbc84ca1ebe403c43,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765567937896034608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86482a73-fed6-4ee2-93dd-8079de7542f0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec93bc4394f3154447edaf2dccb4600daaf9b0499f3d4333a07e22e7d58673c,PodSandboxId:a46ac965878cf313218d8cc7223a8bdd5bb30542c5f584bc36d2861d6fb1f31e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765567922853623140,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-hppl2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7b5c91af-8fc7-4d47-875e-d78a54b2c59f,},Annotations:map[string]string{io.kubernetes.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5728418319c3823fde8ec0d5902908261a293f2719f68770fcf113e98bdce493,PodSandboxId:e5663746f2894d5e7c986087132b0c4043e181abee9a49a34a06356e29fe8c44,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765567904755015404,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-twfg2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8f2f4060-276e-41bb-bed9-c734bf5967ce,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b90859836907690570ed8f4f5cbc16fb5d0b64660f4f1bae895049c4c8514d,PodSandboxId:f6b67a56ec25ed99a6254ee2918988d2702c5d588a8c4e96230d61c5cf974c24,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765567904553571552,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pdz68,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 414c9a1f-9f9e-44b4-be77-987eadd5f18c,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2171eb5eb101a13d23803b5a8ebf0a86a8d17b45e7acbca8b43ba049f1b7512,PodSandboxId:a0471b2e05c0ab96a47f67c6fb7fdb8ee11a2762c72e3f21b7239a8255324897,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765567901140077918,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-4tdnr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c246219c-ebf0-4567-bacc-ed288d17a0e1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29cb209f08af1036fc97325bb4faaca22221448f614c69562527ca5dd4a9b13b,PodSandboxId:2dc32e7104f9974f471e97b0147337c2171986526e4fea85e43675d7f3ae83b2,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765567879411513042,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b04ee36-5eba-4b96-995d-1a77e2ddb46b,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f0e7375564e7015b2d15f50b51e3fb436315a9f6c364ec02fdd5c59190723c,PodSandboxId:f33474ff319cb99f546ffff3e938a58d9cff6269b69eda57c424b42ae86e876e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765567859984510571,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2xl4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede87043-19cb-485d-8eb9-d84d809cdc54,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa121a6614b9cb7f5e3f51937ba612d6ce2cf89d1dde25294b15039e14722e83,PodSandboxId:c9804addfe1ef68626ad31a2a5dddaca92997464acd44f1013af73e997e42e5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765567849703328907,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f852b24-b5fe-4b85-8007-74282a8e3746,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a0b61582ab330327b4249677dc6b464244ac28ab0a195dc7dbdf4a6dbf6b28,PodSandboxId:e494954752c0ff47cccd80ea99aba276d5dcfc49147e30b0eaba4630ea02883b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765567843647449435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vvxxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9292f5-1548-47ef-a76a-f488221712e1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b5a1c18d28614bf90454b948d289136b0f20c9341fe303e084b91bd607c3c0,PodSandboxId:46b7fe6eaddbb5a0738092f9932330558dd699058a0c1ecbdd81313662caae5e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765567842961644101,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x5bxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1efedaf7-228f-4318-bd8c-a85d80dd0b77,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e4a1f5db77865ef545f2488910456b56b11d204019cb86b5f5c0cc1d270cc0,PodSandboxId:1c7dc9dc3acc41ce092c7cbaf82d78103ff5f0cb1e52591555183d9316bf9980,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765567830144455928,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce3b7a702fbef9a6121b414f85545a0,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6f6f7fbcb408d5e35a803b554ef80fbe5b17a42b5c7b4dffc8e376aff7c5d3,PodSandboxId:5c3481bbc735f57e8e99bfc3693ca4746ed92c3acf843246ef44a7239802eeaf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765567830115776659,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e098927010764c91c96aa66fd9ba6efc,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993da190cfa742a9b8f23f3ae63ccae627606b22fdf705984f1cd8e26c9054f5,PodSandboxId:f0a593f81c8ac21d6b183fedd115a17bc50b72d69bf90826e37a050976ccbdee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765567830099509594,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40eda06a75df0cd69d41a597946a693,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a66ca19ad2e53e2d61831c7925ed52e18848199736d2911d817c709827eda5,PodSandboxId:61e9bb926854de445e9d469dbc8eb0bf4a1494fd0965e66adf02270a40446bbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765567830089573242,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a47e93b8e8cc0ae4fe59ac4b3e6151,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbdec8a7-2bfc-46eb-a06c-5c65d509a0a9 name=/runtime.v1.RuntimeService/ListContainers
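The two near-identical ListContainers dumps above (request ids 77cbf8e6 and fbdec8a7, ~40ms apart) are routine rather than suspicious: the kubelet and the log collector poll CRI-O over the CRI at short intervals, so the full container list recurs constantly at debug level. The same view can be pulled by hand from inside the VM with crictl (illustrative command, not part of the test run):
$ out/minikube-linux-amd64 -p addons-347541 ssh "sudo crictl ps -a"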
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
3a73781e60a95 public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff 2 minutes ago Running nginx 0 d1bc8541b182d nginx default
e57e904e6080a gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 3 minutes ago Running busybox 0 003c1dd6f275a busybox default
9ec93bc4394f3 registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad 3 minutes ago Running controller 0 a46ac965878cf ingress-nginx-controller-85d4c799dd-hppl2 ingress-nginx
5728418319c38 a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e 3 minutes ago Exited patch 1 e5663746f2894 ingress-nginx-admission-patch-twfg2 ingress-nginx
27b9085983690 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285 3 minutes ago Exited create 0 f6b67a56ec25e ingress-nginx-admission-create-pdz68 ingress-nginx
b2171eb5eb101 docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef 3 minutes ago Running local-path-provisioner 0 a0471b2e05c0a local-path-provisioner-648f6765c9-4tdnr local-path-storage
29cb209f08af1 docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 4 minutes ago Running minikube-ingress-dns 0 2dc32e7104f99 kube-ingress-dns-minikube kube-system
23f0e7375564e docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 f33474ff319cb amd-gpu-device-plugin-2xl4r kube-system
aa121a6614b9c 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 c9804addfe1ef storage-provisioner kube-system
f4a0b61582ab3 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 e494954752c0f coredns-66bc5c9577-vvxxj kube-system
c5b5a1c18d286 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45 4 minutes ago Running kube-proxy 0 46b7fe6eaddbb kube-proxy-x5bxp kube-system
e3e4a1f5db778 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952 4 minutes ago Running kube-scheduler 0 1c7dc9dc3acc4 kube-scheduler-addons-347541 kube-system
da6f6f7fbcb40 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8 4 minutes ago Running kube-controller-manager 0 5c3481bbc735f kube-controller-manager-addons-347541 kube-system
993da190cfa74 a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1 4 minutes ago Running etcd 0 f0a593f81c8ac etcd-addons-347541 kube-system
e4a66ca19ad2e a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85 4 minutes ago Running kube-apiserver 0 61e9bb926854d kube-apiserver-addons-347541 kube-system
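Reading this table against the failure: the nginx test pod and ingress-nginx-controller-85d4c799dd-hppl2 are both Running with zero restarts, so the failed curl (status 28 is presumably curl's "operation timed out" exit code surfacing through ssh) points at the 127.0.0.1:80 hostPort path inside the VM rather than at pod health. A hypothetical manual re-check with an explicit timeout, reusing the test's own command shape:
$ out/minikube-linux-amd64 -p addons-347541 ssh "curl -s -m 10 -o /dev/null -w '%{http_code}' -H 'Host: nginx.example.com' http://127.0.0.1/"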
==> coredns [f4a0b61582ab330327b4249677dc6b464244ac28ab0a195dc7dbdf4a6dbf6b28] <==
[INFO] 10.244.0.8:46060 - 9455 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000227976s
[INFO] 10.244.0.8:46060 - 26533 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000106511s
[INFO] 10.244.0.8:46060 - 2919 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000124641s
[INFO] 10.244.0.8:46060 - 45545 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000167789s
[INFO] 10.244.0.8:46060 - 12019 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000234577s
[INFO] 10.244.0.8:46060 - 5588 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000114497s
[INFO] 10.244.0.8:46060 - 4388 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000107016s
[INFO] 10.244.0.8:38848 - 48219 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000106622s
[INFO] 10.244.0.8:38848 - 47875 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000315498s
[INFO] 10.244.0.8:56476 - 34446 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114442s
[INFO] 10.244.0.8:56476 - 34715 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000106716s
[INFO] 10.244.0.8:37845 - 40957 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088945s
[INFO] 10.244.0.8:37845 - 40699 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000220127s
[INFO] 10.244.0.8:57229 - 20844 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000075743s
[INFO] 10.244.0.8:57229 - 21074 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000222914s
[INFO] 10.244.0.23:48114 - 7782 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000371664s
[INFO] 10.244.0.23:47280 - 11813 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00042697s
[INFO] 10.244.0.23:54307 - 61588 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000154412s
[INFO] 10.244.0.23:39259 - 52464 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000114222s
[INFO] 10.244.0.23:52429 - 21285 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000092595s
[INFO] 10.244.0.23:58630 - 9848 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00025888s
[INFO] 10.244.0.23:56600 - 1549 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004322962s
[INFO] 10.244.0.23:43005 - 10322 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.004139184s
[INFO] 10.244.0.26:54377 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000319335s
[INFO] 10.244.0.26:55768 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00039087s
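The NXDOMAIN/NOERROR pairs above are ordinary search-path expansion, not resolver failures: with the typical ndots:5 pod configuration, a name like storage.googleapis.com is first tried against each cluster search domain (<ns>.svc.cluster.local, svc.cluster.local, cluster.local) and only then forwarded upstream as-is, which is the query that returns NOERROR. The resolver settings driving this can be inspected from any pod (a sketch using the busybox pod from the status table; kubeadm defaults assumed):
$ kubectl --context addons-347541 exec busybox -- cat /etc/resolv.conf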
==> describe nodes <==
Name: addons-347541
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-347541
kubernetes.io/os=linux
minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
minikube.k8s.io/name=addons-347541
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_12T19_30_37_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-347541
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 12 Dec 2025 19:30:33 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-347541
AcquireTime: <unset>
RenewTime: Fri, 12 Dec 2025 19:35:22 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 12 Dec 2025 19:33:10 +0000 Fri, 12 Dec 2025 19:30:30 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 12 Dec 2025 19:33:10 +0000 Fri, 12 Dec 2025 19:30:30 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 12 Dec 2025 19:33:10 +0000 Fri, 12 Dec 2025 19:30:30 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 12 Dec 2025 19:33:10 +0000 Fri, 12 Dec 2025 19:30:38 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.202
Hostname: addons-347541
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
System Info:
Machine ID: b1fb684fda1f46759f4baa96973add54
System UUID: b1fb684f-da1f-4675-9f4b-aa96973add54
Boot ID: 52ef5230-d59b-4f34-a260-06b6298107c5
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.2
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m10s
default hello-world-app-5d498dc89-qwv5d 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m29s
ingress-nginx ingress-nginx-controller-85d4c799dd-hppl2 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m32s
kube-system amd-gpu-device-plugin-2xl4r 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m38s
kube-system coredns-66bc5c9577-vvxxj 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m41s
kube-system etcd-addons-347541 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m46s
kube-system kube-apiserver-addons-347541 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m46s
kube-system kube-controller-manager-addons-347541 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m46s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m35s
kube-system kube-proxy-x5bxp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m41s
kube-system kube-scheduler-addons-347541 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m46s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m35s
local-path-storage local-path-provisioner-648f6765c9-4tdnr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m34s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m39s kube-proxy
Normal Starting 4m54s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m54s (x8 over 4m54s) kubelet Node addons-347541 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m54s (x8 over 4m54s) kubelet Node addons-347541 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m54s (x7 over 4m54s) kubelet Node addons-347541 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m54s kubelet Updated Node Allocatable limit across pods
Normal Starting 4m46s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m46s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m46s kubelet Node addons-347541 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m46s kubelet Node addons-347541 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m46s kubelet Node addons-347541 status is now: NodeHasSufficientPID
Normal NodeReady 4m45s kubelet Node addons-347541 status is now: NodeReady
Normal RegisteredNode 4m42s node-controller Node addons-347541 event: Registered Node addons-347541 in Controller
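Nothing in the node description suggests resource pressure: all three pressure conditions are False, the node has been Ready since 19:30:38, and CPU requests total 850m of the 2 allocatable cores. It does confirm a small test VM (2 vCPUs, ~4Gi RAM) and that hello-world-app had been scheduled only 2s before this capture. The same snapshot can be regenerated at any time with:
$ kubectl --context addons-347541 describe node addons-347541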
==> dmesg <==
[ +0.479536] kauditd_printk_skb: 18 callbacks suppressed
[ +0.978987] kauditd_printk_skb: 318 callbacks suppressed
[ +0.393842] kauditd_printk_skb: 380 callbacks suppressed
[ +1.060454] kauditd_printk_skb: 315 callbacks suppressed
[Dec12 19:31] kauditd_printk_skb: 7 callbacks suppressed
[ +13.128736] kauditd_printk_skb: 32 callbacks suppressed
[ +5.326519] kauditd_printk_skb: 11 callbacks suppressed
[ +7.131008] kauditd_printk_skb: 32 callbacks suppressed
[ +6.163427] kauditd_printk_skb: 86 callbacks suppressed
[ +5.021516] kauditd_printk_skb: 26 callbacks suppressed
[ +1.550965] kauditd_printk_skb: 121 callbacks suppressed
[ +1.002231] kauditd_printk_skb: 140 callbacks suppressed
[Dec12 19:32] kauditd_printk_skb: 61 callbacks suppressed
[ +9.440573] kauditd_printk_skb: 68 callbacks suppressed
[ +2.052464] kauditd_printk_skb: 53 callbacks suppressed
[ +10.765154] kauditd_printk_skb: 11 callbacks suppressed
[ +5.903422] kauditd_printk_skb: 22 callbacks suppressed
[ +4.721002] kauditd_printk_skb: 38 callbacks suppressed
[ +0.000033] kauditd_printk_skb: 69 callbacks suppressed
[ +1.512297] kauditd_printk_skb: 129 callbacks suppressed
[ +3.578938] kauditd_printk_skb: 204 callbacks suppressed
[Dec12 19:33] kauditd_printk_skb: 120 callbacks suppressed
[ +0.000050] kauditd_printk_skb: 83 callbacks suppressed
[ +5.850244] kauditd_printk_skb: 41 callbacks suppressed
[Dec12 19:35] kauditd_printk_skb: 127 callbacks suppressed
==> etcd [993da190cfa742a9b8f23f3ae63ccae627606b22fdf705984f1cd8e26c9054f5] <==
{"level":"info","ts":"2025-12-12T19:31:36.266314Z","caller":"traceutil/trace.go:172","msg":"trace[644682985] transaction","detail":"{read_only:false; response_revision:1026; number_of_response:1; }","duration":"124.31474ms","start":"2025-12-12T19:31:36.141985Z","end":"2025-12-12T19:31:36.266300Z","steps":["trace[644682985] 'process raft request' (duration: 124.180884ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-12T19:31:49.684443Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.231496ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourceclaims\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-12T19:31:49.684509Z","caller":"traceutil/trace.go:172","msg":"trace[484670056] range","detail":"{range_begin:/registry/resourceclaims; range_end:; response_count:0; response_revision:1101; }","duration":"182.306912ms","start":"2025-12-12T19:31:49.502192Z","end":"2025-12-12T19:31:49.684499Z","steps":["trace[484670056] 'range keys from in-memory index tree' (duration: 182.109918ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-12T19:31:49.684669Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.261649ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-12T19:31:49.684708Z","caller":"traceutil/trace.go:172","msg":"trace[1925181992] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1101; }","duration":"138.300427ms","start":"2025-12-12T19:31:49.546400Z","end":"2025-12-12T19:31:49.684701Z","steps":["trace[1925181992] 'range keys from in-memory index tree' (duration: 138.222072ms)"],"step_count":1}
{"level":"info","ts":"2025-12-12T19:32:02.699962Z","caller":"traceutil/trace.go:172","msg":"trace[292230520] linearizableReadLoop","detail":"{readStateIndex:1206; appliedIndex:1206; }","duration":"154.3551ms","start":"2025-12-12T19:32:02.545567Z","end":"2025-12-12T19:32:02.699923Z","steps":["trace[292230520] 'read index received' (duration: 154.349023ms)","trace[292230520] 'applied index is now lower than readState.Index' (duration: 5.275µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-12T19:32:02.700125Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.54331ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-12T19:32:02.700146Z","caller":"traceutil/trace.go:172","msg":"trace[2051985376] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1172; }","duration":"154.576987ms","start":"2025-12-12T19:32:02.545563Z","end":"2025-12-12T19:32:02.700140Z","steps":["trace[2051985376] 'agreement among raft nodes before linearized reading' (duration: 154.516893ms)"],"step_count":1}
{"level":"info","ts":"2025-12-12T19:32:02.700148Z","caller":"traceutil/trace.go:172","msg":"trace[1238524007] transaction","detail":"{read_only:false; response_revision:1173; number_of_response:1; }","duration":"282.517082ms","start":"2025-12-12T19:32:02.417621Z","end":"2025-12-12T19:32:02.700138Z","steps":["trace[1238524007] 'process raft request' (duration: 282.423215ms)"],"step_count":1}
{"level":"info","ts":"2025-12-12T19:32:38.516037Z","caller":"traceutil/trace.go:172","msg":"trace[2131806319] transaction","detail":"{read_only:false; response_revision:1370; number_of_response:1; }","duration":"102.295139ms","start":"2025-12-12T19:32:38.413727Z","end":"2025-12-12T19:32:38.516022Z","steps":["trace[2131806319] 'process raft request' (duration: 102.193907ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-12T19:32:40.233037Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"258.712205ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-12T19:32:40.233098Z","caller":"traceutil/trace.go:172","msg":"trace[234955170] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1387; }","duration":"258.785979ms","start":"2025-12-12T19:32:39.974300Z","end":"2025-12-12T19:32:40.233086Z","steps":["trace[234955170] 'agreement among raft nodes before linearized reading' (duration: 28.947629ms)","trace[234955170] 'range keys from in-memory index tree' (duration: 229.734196ms)"],"step_count":2}
{"level":"warn","ts":"2025-12-12T19:32:40.233382Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"229.808262ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7391140448856292128 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/metrics-server\" mod_revision:599 > success:<request_delete_range:<key:\"/registry/serviceaccounts/kube-system/metrics-server\" > > failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/metrics-server\" > >>","response":"size:18"}
{"level":"info","ts":"2025-12-12T19:32:40.233436Z","caller":"traceutil/trace.go:172","msg":"trace[1904781771] linearizableReadLoop","detail":"{readStateIndex:1430; appliedIndex:1429; }","duration":"230.233016ms","start":"2025-12-12T19:32:40.003196Z","end":"2025-12-12T19:32:40.233429Z","steps":["trace[1904781771] 'read index received' (duration: 39.799µs)","trace[1904781771] 'applied index is now lower than readState.Index' (duration: 230.192774ms)"],"step_count":2}
{"level":"info","ts":"2025-12-12T19:32:40.233769Z","caller":"traceutil/trace.go:172","msg":"trace[1977029529] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1388; }","duration":"279.207722ms","start":"2025-12-12T19:32:39.954551Z","end":"2025-12-12T19:32:40.233759Z","steps":["trace[1977029529] 'process raft request' (duration: 48.735418ms)","trace[1977029529] 'compare' (duration: 229.635665ms)"],"step_count":2}
{"level":"warn","ts":"2025-12-12T19:32:40.233978Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"254.600086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/yakd-dashboard/yakd-dashboard-5ff678cb9-g8n6k\" limit:1 ","response":"range_response_count:1 size:4671"}
{"level":"info","ts":"2025-12-12T19:32:40.233996Z","caller":"traceutil/trace.go:172","msg":"trace[1926552127] range","detail":"{range_begin:/registry/pods/yakd-dashboard/yakd-dashboard-5ff678cb9-g8n6k; range_end:; response_count:1; response_revision:1388; }","duration":"254.622071ms","start":"2025-12-12T19:32:39.979369Z","end":"2025-12-12T19:32:40.233992Z","steps":["trace[1926552127] 'agreement among raft nodes before linearized reading' (duration: 254.543767ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-12T19:32:40.234119Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"255.307343ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-85b7d694d7-tmr5k\" limit:1 ","response":"range_response_count:1 size:4650"}
{"level":"info","ts":"2025-12-12T19:32:40.234132Z","caller":"traceutil/trace.go:172","msg":"trace[1772879783] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-85b7d694d7-tmr5k; range_end:; response_count:1; response_revision:1388; }","duration":"255.323133ms","start":"2025-12-12T19:32:39.978805Z","end":"2025-12-12T19:32:40.234128Z","steps":["trace[1772879783] 'agreement among raft nodes before linearized reading' (duration: 255.273723ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-12T19:32:40.234294Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.505703ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-12T19:32:40.234311Z","caller":"traceutil/trace.go:172","msg":"trace[2075638050] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1388; }","duration":"153.524604ms","start":"2025-12-12T19:32:40.080782Z","end":"2025-12-12T19:32:40.234307Z","steps":["trace[2075638050] 'agreement among raft nodes before linearized reading' (duration: 153.491733ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-12T19:33:33.154421Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.378021ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" limit:1 ","response":"range_response_count:1 size:350"}
{"level":"info","ts":"2025-12-12T19:33:33.154506Z","caller":"traceutil/trace.go:172","msg":"trace[858335737] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:1864; }","duration":"112.476775ms","start":"2025-12-12T19:33:33.042019Z","end":"2025-12-12T19:33:33.154496Z","steps":["trace[858335737] 'range keys from in-memory index tree' (duration: 112.157622ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-12T19:33:33.154826Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.883006ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-12T19:33:33.154912Z","caller":"traceutil/trace.go:172","msg":"trace[626470589] range","detail":"{range_begin:/registry/controllerrevisions; range_end:; response_count:0; response_revision:1864; }","duration":"113.002623ms","start":"2025-12-12T19:33:33.041901Z","end":"2025-12-12T19:33:33.154904Z","steps":["trace[626470589] 'range keys from in-memory index tree' (duration: 112.82436ms)"],"step_count":1}
==> kernel <==
19:35:23 up 5 min, 0 users, load average: 0.21, 0.72, 0.40
Linux addons-347541 6.6.95 #1 SMP PREEMPT_DYNAMIC Fri Dec 12 05:38:44 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [e4a66ca19ad2e53e2d61831c7925ed52e18848199736d2911d817c709827eda5] <==
> logger="UnhandledError"
E1212 19:31:28.477186 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.71.121:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.71.121:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.71.121:443: connect: connection refused" logger="UnhandledError"
E1212 19:31:28.477854 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.71.121:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.71.121:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.71.121:443: connect: connection refused" logger="UnhandledError"
I1212 19:31:28.551841 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1212 19:32:25.520624 1 conn.go:339] Error on socket receive: read tcp 192.168.39.202:8443->192.168.39.1:57530: use of closed network connection
E1212 19:32:25.699849 1 conn.go:339] Error on socket receive: read tcp 192.168.39.202:8443->192.168.39.1:57572: use of closed network connection
I1212 19:32:34.805123 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.84.172"}
I1212 19:32:54.111764 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1212 19:32:54.300308 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.209.242"}
I1212 19:33:07.502455 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1212 19:33:29.486902 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1212 19:33:31.596941 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1212 19:33:31.597423 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1212 19:33:31.629618 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1212 19:33:31.631785 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1212 19:33:31.664026 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1212 19:33:31.664087 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1212 19:33:31.674804 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1212 19:33:31.674900 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1212 19:33:31.801159 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1212 19:33:31.801370 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1212 19:33:32.664286 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1212 19:33:32.802612 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1212 19:33:32.826429 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I1212 19:35:21.815562 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.90.11"}
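Note: the v1beta1.metrics.k8s.io "connection refused" errors at 19:31:28 are the API aggregation layer probing metrics-server before its Service endpoint was serving; the 19:33:29 "Nothing (removed from the queue)" entry shows the aggregator settled once it became reachable. Aggregated-API health can be checked directly, e.g.:
  kubectl --context addons-347541 get apiservice v1beta1.metrics.k8s.io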
==> kube-controller-manager [da6f6f7fbcb408d5e35a803b554ef80fbe5b17a42b5c7b4dffc8e376aff7c5d3] <==
I1212 19:33:41.109270 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1212 19:33:41.154742 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1212 19:33:41.154782 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1212 19:33:41.835177 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1212 19:33:41.836457 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
[... the same pair of reflector errors recurs with backoff at 19:33:43, 19:33:47, 19:33:51, 19:33:54, 19:34:06, 19:34:16 (twice), 19:34:38 and 19:34:45 ...]
E1212 19:34:59.010192 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1212 19:34:59.011419 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
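Note: this repeating "Failed to watch *v1.PartialObjectMetadata" pair most likely follows from the API server terminating the snapshot.storage.k8s.io watchers at 19:33:32 above: the controller-manager's metadata informers keep trying to re-list resources whose CRDs were removed when the volumesnapshots addon was torn down, hence "the server could not find the requested resource". A quick sanity check (not part of the test run) would be:
  kubectl --context addons-347541 get crd | grep snapshot.storage.k8s.io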
==> kube-proxy [c5b5a1c18d28614bf90454b948d289136b0f20c9341fe303e084b91bd607c3c0] <==
I1212 19:30:43.661474 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1212 19:30:43.762593 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1212 19:30:43.763408 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.202"]
E1212 19:30:43.764264 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1212 19:30:43.949198 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1212 19:30:43.949588 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1212 19:30:43.949914 1 server_linux.go:132] "Using iptables Proxier"
I1212 19:30:43.966040 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1212 19:30:43.967118 1 server.go:527] "Version info" version="v1.34.2"
I1212 19:30:43.968073 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1212 19:30:43.972932 1 config.go:200] "Starting service config controller"
I1212 19:30:43.972957 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1212 19:30:43.972972 1 config.go:106] "Starting endpoint slice config controller"
I1212 19:30:43.972975 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1212 19:30:43.972985 1 config.go:403] "Starting serviceCIDR config controller"
I1212 19:30:43.972988 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1212 19:30:43.979580 1 config.go:309] "Starting node config controller"
I1212 19:30:43.979605 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1212 19:30:43.979670 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1212 19:30:44.073958 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1212 19:30:44.073976 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1212 19:30:44.073990 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
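Note: the "Kube-proxy configuration may be incomplete or incorrect" warning is advisory: with nodePortAddresses unset, NodePorts listen on all local IPs, and kube-proxy sets route_localnet=1 so they also answer on localhost. To scope NodePorts to the primary interface as the message suggests, a sketch (the "primary" keyword needs a reasonably recent Kubernetes, and kube-proxy must be restarted to pick the change up):
  kubectl --context addons-347541 -n kube-system edit configmap kube-proxy
  # set, inside the KubeProxyConfiguration document:
  #   nodePortAddresses: ["primary"]
  kubectl --context addons-347541 -n kube-system delete pod -l k8s-app=kube-proxy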
==> kube-scheduler [e3e4a1f5db77865ef545f2488910456b56b11d204019cb86b5f5c0cc1d270cc0] <==
E1212 19:30:33.517815 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1212 19:30:33.517970 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1212 19:30:33.518139 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1212 19:30:33.518183 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1212 19:30:33.518797 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1212 19:30:34.322323 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1212 19:30:34.372754 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1212 19:30:34.383137 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1212 19:30:34.413931 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1212 19:30:34.433357 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1212 19:30:34.488002 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1212 19:30:34.514004 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1212 19:30:34.522340 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1212 19:30:34.549670 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1212 19:30:34.679731 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1212 19:30:34.703912 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1212 19:30:34.713458 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1212 19:30:34.714105 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1212 19:30:34.844773 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1212 19:30:34.944068 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1212 19:30:34.994477 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1212 19:30:35.021036 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1212 19:30:35.067265 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1212 19:30:35.114701 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
I1212 19:30:37.507973 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
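Note: the wall of "is forbidden" errors is the usual scheduler startup race: its informers start before the system:kube-scheduler RBAC bindings are published, and the errors stop once "Caches are synced" lands at 19:30:37. Were they to persist, the permissions could be probed directly:
  kubectl --context addons-347541 auth can-i list pods --as=system:kube-scheduler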
==> kubelet <==
Dec 12 19:33:40 addons-347541 kubelet[1502]: I1212 19:33:40.665540 1502 scope.go:117] "RemoveContainer" containerID="cf69239fad9f106ef4c497631e55fa6e89f2319c68106ffc2cdeffa0be2d0619"
Dec 12 19:33:40 addons-347541 kubelet[1502]: I1212 19:33:40.785182 1502 scope.go:117] "RemoveContainer" containerID="3b3e3f1179bc8c11c310a9f2033fb3f323372b82c6aea0fc1f9032b498d6c8d7"
Dec 12 19:33:47 addons-347541 kubelet[1502]: E1212 19:33:47.291488 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765568027290683622 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 12 19:33:47 addons-347541 kubelet[1502]: E1212 19:33:47.291537 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765568027290683622 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
[... the same eviction_manager error pair repeats every 10 seconds, 19:33:57 through 19:34:47 ...]
Dec 12 19:34:50 addons-347541 kubelet[1502]: I1212 19:34:50.079912 1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-2xl4r" secret="" err="secret \"gcp-auth\" not found"
Dec 12 19:34:57 addons-347541 kubelet[1502]: E1212 19:34:57.313262 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765568097312553772 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 12 19:34:57 addons-347541 kubelet[1502]: E1212 19:34:57.313284 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765568097312553772 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 12 19:34:59 addons-347541 kubelet[1502]: I1212 19:34:59.079330 1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Dec 12 19:35:07 addons-347541 kubelet[1502]: E1212 19:35:07.316532 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765568107316101700 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 12 19:35:07 addons-347541 kubelet[1502]: E1212 19:35:07.316554 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765568107316101700 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 12 19:35:17 addons-347541 kubelet[1502]: E1212 19:35:17.319545 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765568117319172287 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 12 19:35:17 addons-347541 kubelet[1502]: E1212 19:35:17.319582 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765568117319172287 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 12 19:35:21 addons-347541 kubelet[1502]: I1212 19:35:21.794355 1502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjtdj\" (UniqueName: \"kubernetes.io/projected/294e2720-bcd8-4163-9911-1ef5a6bbc9ba-kube-api-access-mjtdj\") pod \"hello-world-app-5d498dc89-qwv5d\" (UID: \"294e2720-bcd8-4163-9911-1ef5a6bbc9ba\") " pod="default/hello-world-app-5d498dc89-qwv5d"
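Note: the eviction_manager "missing image stats" errors repeat every stats cycle and mean the kubelet could not reconcile the image-filesystem stats reported by the container runtime (cri-o, judging by the /var/lib/containers/storage/overlay-images mountpoint). They are noisy but evidently not fatal here, since hello-world-app is still admitted at 19:35:21. The runtime's own view can be compared from inside the VM:
  out/minikube-linux-amd64 -p addons-347541 ssh "sudo crictl imagefsinfo"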
==> storage-provisioner [aa121a6614b9cb7f5e3f51937ba612d6ce2cf89d1dde25294b15039e14722e83] <==
W1212 19:34:57.688145 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1212 19:34:59.691582 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1212 19:34:59.695818 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
[... the same warning repeats roughly every two seconds through 19:35:19 ...]
W1212 19:35:21.788822 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1212 19:35:21.799347 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
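Note: these are client-side deprecation warnings surfaced by client-go: the storage-provisioner still polls a v1 Endpoints object (most likely its leader-election lock, given the ~2s cadence), which the API server now flags in favor of discovery.k8s.io/v1 EndpointSlice. Harmless for this run. The object in question can be inspected with (the k8s.io-minikube-hostpath name is the provisioner's convention and may differ):
  kubectl --context addons-347541 -n kube-system get endpoints k8s.io-minikube-hostpath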
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-347541 -n addons-347541
helpers_test.go:270: (dbg) Run: kubectl --context addons-347541 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-qwv5d ingress-nginx-admission-create-pdz68 ingress-nginx-admission-patch-twfg2
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run: kubectl --context addons-347541 describe pod hello-world-app-5d498dc89-qwv5d ingress-nginx-admission-create-pdz68 ingress-nginx-admission-patch-twfg2
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-347541 describe pod hello-world-app-5d498dc89-qwv5d ingress-nginx-admission-create-pdz68 ingress-nginx-admission-patch-twfg2: exit status 1 (82.609843ms)
-- stdout --
Name: hello-world-app-5d498dc89-qwv5d
Namespace: default
Priority: 0
Service Account: default
Node: addons-347541/192.168.39.202
Start Time: Fri, 12 Dec 2025 19:35:21 +0000
Labels: app=hello-world-app
pod-template-hash=5d498dc89
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hello-world-app-5d498dc89
Containers:
hello-world-app:
Container ID:
Image: docker.io/kicbase/echo-server:1.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mjtdj (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-mjtdj:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2s default-scheduler Successfully assigned default/hello-world-app-5d498dc89-qwv5d to addons-347541
Normal Pulling 1s kubelet Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-pdz68" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-twfg2" not found
** /stderr **
helpers_test.go:288: kubectl --context addons-347541 describe pod hello-world-app-5d498dc89-qwv5d ingress-nginx-admission-create-pdz68 ingress-nginx-admission-patch-twfg2: exit status 1
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-347541 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-347541 addons disable ingress-dns --alsologtostderr -v=1: (1.649720367s)
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-347541 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-347541 addons disable ingress --alsologtostderr -v=1: (7.68881061s)
--- FAIL: TestAddons/parallel/Ingress (159.37s)
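Note: the three "non-running" pods in the post-mortem are all expected states: hello-world-app-5d498dc89-qwv5d had been scheduled only 2s before the describe and was still pulling its image, and the two ingress-nginx-admission-* pods appear to be completed Job pods cleaned up between the pod listing and the describe, hence the NotFound errors. To iterate on this test in isolation, something like the following works (a sketch; minikube's integration tests sit behind the integration build tag and take extra harness flags, so consult the repo docs for the exact invocation):
  go test -tags=integration ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 30m -v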