=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run: kubectl --context addons-703051 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run: kubectl --context addons-703051 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run: kubectl --context addons-703051 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [6ecb2063-1677-48a3-8f27-ea2c7d5c93c6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [6ecb2063-1677-48a3-8f27-ea2c7d5c93c6] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.002592889s
I1216 02:28:58.021574 8974 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run: out/minikube-linux-amd64 -p addons-703051 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-703051 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.950482167s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run: kubectl --context addons-703051 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run: out/minikube-linux-amd64 -p addons-703051 ip
addons_test.go:301: (dbg) Run: nslookup hello-john.test 192.168.39.237
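Note: the failing probe can be replayed by hand against the same profile. A minimal sketch, assuming the addons-703051 VM is still running; the remote exit status 28 surfaced above ("ssh: Process exited with status 28") is curl's CURLE_OPERATION_TIMEDOUT, i.e. the request hung rather than being refused, and the -m flag below is an addition to bound the wait:

out/minikube-linux-amd64 -p addons-703051 ssh "curl -s -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"   # in-VM ingress probe
nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-703051 ip)"                                      # ingress-dns lookup against the VM IP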
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-703051 -n addons-703051
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-amd64 -p addons-703051 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-703051 logs -n 25: (1.049334083s)
helpers_test.go:261: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-325050 │ download-only-325050 │ jenkins │ v1.37.0 │ 16 Dec 25 02:26 UTC │ 16 Dec 25 02:26 UTC │
│ start │ --download-only -p binary-mirror-911494 --alsologtostderr --binary-mirror http://127.0.0.1:37719 --driver=kvm2 --container-runtime=crio │ binary-mirror-911494 │ jenkins │ v1.37.0 │ 16 Dec 25 02:26 UTC │ │
│ delete │ -p binary-mirror-911494 │ binary-mirror-911494 │ jenkins │ v1.37.0 │ 16 Dec 25 02:26 UTC │ 16 Dec 25 02:26 UTC │
│ addons │ disable dashboard -p addons-703051 │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:26 UTC │ │
│ addons │ enable dashboard -p addons-703051 │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:26 UTC │ │
│ start │ -p addons-703051 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:26 UTC │ 16 Dec 25 02:28 UTC │
│ addons │ addons-703051 addons disable volcano --alsologtostderr -v=1 │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
│ addons │ addons-703051 addons disable gcp-auth --alsologtostderr -v=1 │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
│ addons │ enable headlamp -p addons-703051 --alsologtostderr -v=1 │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
│ addons │ addons-703051 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
│ addons │ addons-703051 addons disable yakd --alsologtostderr -v=1 │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
│ ssh │ addons-703051 ssh cat /opt/local-path-provisioner/pvc-f9648a3b-9c51-449d-b8e4-4a857e52bcbe_default_test-pvc/file1 │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
│ addons │ addons-703051 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
│ addons │ addons-703051 addons disable headlamp --alsologtostderr -v=1 │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
│ ip │ addons-703051 ip │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
│ addons │ addons-703051 addons disable registry --alsologtostderr -v=1 │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
│ addons │ addons-703051 addons disable metrics-server --alsologtostderr -v=1 │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
│ addons │ addons-703051 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
│ addons │ addons-703051 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-703051 │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
│ addons │ addons-703051 addons disable registry-creds --alsologtostderr -v=1 │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
│ ssh │ addons-703051 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ │
│ addons │ addons-703051 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:29 UTC │ 16 Dec 25 02:29 UTC │
│ addons │ addons-703051 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:29 UTC │ 16 Dec 25 02:29 UTC │
│ ip │ addons-703051 ip │ addons-703051 │ jenkins │ v1.37.0 │ 16 Dec 25 02:31 UTC │ 16 Dec 25 02:31 UTC │
└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/16 02:26:03
Running on machine: ubuntu-20-agent-8
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
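(Per the format line above, a record such as "I1216 02:26:03.746296 9897 out.go:360]" decodes as: Info severity, month 12 day 16, 02:26:03.746296, thread id 9897, emitted at out.go line 360.)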
I1216 02:26:03.746296 9897 out.go:360] Setting OutFile to fd 1 ...
I1216 02:26:03.746397 9897 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:26:03.746405 9897 out.go:374] Setting ErrFile to fd 2...
I1216 02:26:03.746409 9897 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:26:03.746608 9897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
I1216 02:26:03.747101 9897 out.go:368] Setting JSON to false
I1216 02:26:03.747841 9897 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":509,"bootTime":1765851455,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1216 02:26:03.747893 9897 start.go:143] virtualization: kvm guest
I1216 02:26:03.749692 9897 out.go:179] * [addons-703051] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1216 02:26:03.751621 9897 out.go:179] - MINIKUBE_LOCATION=22158
I1216 02:26:03.751399 9897 notify.go:221] Checking for updates...
I1216 02:26:03.753838 9897 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1216 02:26:03.754983 9897 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
I1216 02:26:03.756001 9897 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
I1216 02:26:03.757055 9897 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1216 02:26:03.758092 9897 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1216 02:26:03.759341 9897 driver.go:422] Setting default libvirt URI to qemu:///system
I1216 02:26:03.786847 9897 out.go:179] * Using the kvm2 driver based on user configuration
I1216 02:26:03.787791 9897 start.go:309] selected driver: kvm2
I1216 02:26:03.787801 9897 start.go:927] validating driver "kvm2" against <nil>
I1216 02:26:03.787810 9897 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1216 02:26:03.788464 9897 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1216 02:26:03.788675 9897 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1216 02:26:03.788700 9897 cni.go:84] Creating CNI manager for ""
I1216 02:26:03.788741 9897 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1216 02:26:03.788749 9897 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1216 02:26:03.788782 9897 start.go:353] cluster config:
{Name:addons-703051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-703051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1216 02:26:03.788876 9897 iso.go:125] acquiring lock: {Name:mk055aa36b1051bc664b283a8a6fb2af4db94c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1216 02:26:03.790149 9897 out.go:179] * Starting "addons-703051" primary control-plane node in "addons-703051" cluster
I1216 02:26:03.791150 9897 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1216 02:26:03.791178 9897 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
I1216 02:26:03.791183 9897 cache.go:65] Caching tarball of preloaded images
I1216 02:26:03.791247 9897 preload.go:238] Found /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1216 02:26:03.791257 9897 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
I1216 02:26:03.791518 9897 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/config.json ...
I1216 02:26:03.791537 9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/config.json: {Name:mkdc721774d5722ea61b35495cae8f72a0381294 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:26:03.791657 9897 start.go:360] acquireMachinesLock for addons-703051: {Name:mk6501572e7fc03699ef9d932e34f995d8ad6f98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1216 02:26:03.791710 9897 start.go:364] duration metric: took 41.49µs to acquireMachinesLock for "addons-703051"
I1216 02:26:03.791727 9897 start.go:93] Provisioning new machine with config: &{Name:addons-703051 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-703051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1216 02:26:03.791767 9897 start.go:125] createHost starting for "" (driver="kvm2")
I1216 02:26:03.793122 9897 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1216 02:26:03.793287 9897 start.go:159] libmachine.API.Create for "addons-703051" (driver="kvm2")
I1216 02:26:03.793315 9897 client.go:173] LocalClient.Create starting
I1216 02:26:03.793394 9897 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem
I1216 02:26:03.880330 9897 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem
I1216 02:26:04.032060 9897 main.go:143] libmachine: creating domain...
I1216 02:26:04.032081 9897 main.go:143] libmachine: creating network...
I1216 02:26:04.033639 9897 main.go:143] libmachine: found existing default network
I1216 02:26:04.033868 9897 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1216 02:26:04.034432 9897 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c90220}
I1216 02:26:04.034521 9897 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-703051</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
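(Once created, this network can be inspected from the host with stock libvirt tooling; a hypothetical manual check, not part of this code path, with the URI and network name taken from the log:

virsh --connect qemu:///system net-dumpxml mk-addons-703051     # should match the XML dumped just below
virsh --connect qemu:///system net-dhcp-leases mk-addons-703051 # the leases the IP wait later in the log polls)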
I1216 02:26:04.040527 9897 main.go:143] libmachine: creating private network mk-addons-703051 192.168.39.0/24...
I1216 02:26:04.102113 9897 main.go:143] libmachine: private network mk-addons-703051 192.168.39.0/24 created
I1216 02:26:04.102404 9897 main.go:143] libmachine: <network>
<name>mk-addons-703051</name>
<uuid>96a0ff09-8e21-4333-9498-c46934f922dd</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:30:99:f8'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1216 02:26:04.102438 9897 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051 ...
I1216 02:26:04.102461 9897 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22158-5036/.minikube/cache/iso/amd64/minikube-v1.37.0-1765836331-22158-amd64.iso
I1216 02:26:04.102471 9897 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22158-5036/.minikube
I1216 02:26:04.102532 9897 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22158-5036/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22158-5036/.minikube/cache/iso/amd64/minikube-v1.37.0-1765836331-22158-amd64.iso...
I1216 02:26:04.357110 9897 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa...
I1216 02:26:04.493537 9897 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/addons-703051.rawdisk...
I1216 02:26:04.493573 9897 main.go:143] libmachine: Writing magic tar header
I1216 02:26:04.493591 9897 main.go:143] libmachine: Writing SSH key tar header
I1216 02:26:04.493659 9897 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051 ...
I1216 02:26:04.493713 9897 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051
I1216 02:26:04.493747 9897 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051 (perms=drwx------)
I1216 02:26:04.493763 9897 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22158-5036/.minikube/machines
I1216 02:26:04.493775 9897 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22158-5036/.minikube/machines (perms=drwxr-xr-x)
I1216 02:26:04.493788 9897 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22158-5036/.minikube
I1216 02:26:04.493798 9897 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22158-5036/.minikube (perms=drwxr-xr-x)
I1216 02:26:04.493806 9897 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22158-5036
I1216 02:26:04.493814 9897 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22158-5036 (perms=drwxrwxr-x)
I1216 02:26:04.493825 9897 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1216 02:26:04.493834 9897 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1216 02:26:04.493851 9897 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1216 02:26:04.493861 9897 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1216 02:26:04.493870 9897 main.go:143] libmachine: checking permissions on dir: /home
I1216 02:26:04.493879 9897 main.go:143] libmachine: skipping /home - not owner
I1216 02:26:04.493883 9897 main.go:143] libmachine: defining domain...
I1216 02:26:04.495130 9897 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-703051</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/addons-703051.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-703051'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
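(The defined domain can likewise be checked by hand; a hypothetical sketch mirroring the lease/ARP lookups the log retries below, assuming virsh is available on the host:

virsh --connect qemu:///system dominfo addons-703051                   # state, vCPUs, memory
virsh --connect qemu:///system domifaddr addons-703051 --source lease  # same lookup the log labels source=lease
virsh --connect qemu:///system domifaddr addons-703051 --source arp    # the fallback labeled source=arp)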
I1216 02:26:04.502470 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:d0:cf:e3 in network default
I1216 02:26:04.503142 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:04.503160 9897 main.go:143] libmachine: starting domain...
I1216 02:26:04.503164 9897 main.go:143] libmachine: ensuring networks are active...
I1216 02:26:04.503855 9897 main.go:143] libmachine: Ensuring network default is active
I1216 02:26:04.504280 9897 main.go:143] libmachine: Ensuring network mk-addons-703051 is active
I1216 02:26:04.504973 9897 main.go:143] libmachine: getting domain XML...
I1216 02:26:04.506231 9897 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-703051</name>
<uuid>c4ab45a7-215f-430e-bc6b-14f6c9c94339</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/addons-703051.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:7a:59:00'/>
<source network='mk-addons-703051'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:d0:cf:e3'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1216 02:26:05.754583 9897 main.go:143] libmachine: waiting for domain to start...
I1216 02:26:05.755685 9897 main.go:143] libmachine: domain is now running
I1216 02:26:05.755699 9897 main.go:143] libmachine: waiting for IP...
I1216 02:26:05.756381 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:05.756847 9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
I1216 02:26:05.756859 9897 main.go:143] libmachine: trying to list again with source=arp
I1216 02:26:05.757132 9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
I1216 02:26:05.757164 9897 retry.go:31] will retry after 194.356704ms: waiting for domain to come up
I1216 02:26:05.953523 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:05.954069 9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
I1216 02:26:05.954086 9897 main.go:143] libmachine: trying to list again with source=arp
I1216 02:26:05.954380 9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
I1216 02:26:05.954421 9897 retry.go:31] will retry after 363.516423ms: waiting for domain to come up
I1216 02:26:06.319807 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:06.320279 9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
I1216 02:26:06.320293 9897 main.go:143] libmachine: trying to list again with source=arp
I1216 02:26:06.320536 9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
I1216 02:26:06.320567 9897 retry.go:31] will retry after 436.798052ms: waiting for domain to come up
I1216 02:26:06.759226 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:06.759840 9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
I1216 02:26:06.759855 9897 main.go:143] libmachine: trying to list again with source=arp
I1216 02:26:06.760212 9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
I1216 02:26:06.760245 9897 retry.go:31] will retry after 403.662247ms: waiting for domain to come up
I1216 02:26:07.165830 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:07.166400 9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
I1216 02:26:07.166415 9897 main.go:143] libmachine: trying to list again with source=arp
I1216 02:26:07.166676 9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
I1216 02:26:07.166705 9897 retry.go:31] will retry after 481.547373ms: waiting for domain to come up
I1216 02:26:07.649835 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:07.650570 9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
I1216 02:26:07.650595 9897 main.go:143] libmachine: trying to list again with source=arp
I1216 02:26:07.651002 9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
I1216 02:26:07.651036 9897 retry.go:31] will retry after 630.696287ms: waiting for domain to come up
I1216 02:26:08.282796 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:08.283364 9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
I1216 02:26:08.283378 9897 main.go:143] libmachine: trying to list again with source=arp
I1216 02:26:08.283654 9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
I1216 02:26:08.283685 9897 retry.go:31] will retry after 823.417805ms: waiting for domain to come up
I1216 02:26:09.109082 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:09.109664 9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
I1216 02:26:09.109680 9897 main.go:143] libmachine: trying to list again with source=arp
I1216 02:26:09.109955 9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
I1216 02:26:09.109988 9897 retry.go:31] will retry after 1.344643175s: waiting for domain to come up
I1216 02:26:10.456175 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:10.456703 9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
I1216 02:26:10.456721 9897 main.go:143] libmachine: trying to list again with source=arp
I1216 02:26:10.457007 9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
I1216 02:26:10.457042 9897 retry.go:31] will retry after 1.518653081s: waiting for domain to come up
I1216 02:26:11.976717 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:11.977252 9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
I1216 02:26:11.977276 9897 main.go:143] libmachine: trying to list again with source=arp
I1216 02:26:11.977562 9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
I1216 02:26:11.977592 9897 retry.go:31] will retry after 1.82369489s: waiting for domain to come up
I1216 02:26:13.803556 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:13.804131 9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
I1216 02:26:13.804153 9897 main.go:143] libmachine: trying to list again with source=arp
I1216 02:26:13.804484 9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
I1216 02:26:13.804524 9897 retry.go:31] will retry after 2.904064752s: waiting for domain to come up
I1216 02:26:16.712141 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:16.712715 9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
I1216 02:26:16.712735 9897 main.go:143] libmachine: trying to list again with source=arp
I1216 02:26:16.713013 9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
I1216 02:26:16.713051 9897 retry.go:31] will retry after 2.942381057s: waiting for domain to come up
I1216 02:26:19.657109 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:19.657655 9897 main.go:143] libmachine: domain addons-703051 has current primary IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:19.657669 9897 main.go:143] libmachine: found domain IP: 192.168.39.237
I1216 02:26:19.657676 9897 main.go:143] libmachine: reserving static IP address...
I1216 02:26:19.658021 9897 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-703051", mac: "52:54:00:7a:59:00", ip: "192.168.39.237"} in network mk-addons-703051
I1216 02:26:19.844126 9897 main.go:143] libmachine: reserved static IP address 192.168.39.237 for domain addons-703051
I1216 02:26:19.844149 9897 main.go:143] libmachine: waiting for SSH...
I1216 02:26:19.844158 9897 main.go:143] libmachine: Getting to WaitForSSH function...
I1216 02:26:19.847118 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:19.847656 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7a:59:00}
I1216 02:26:19.847688 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:19.847914 9897 main.go:143] libmachine: Using SSH client type: native
I1216 02:26:19.848168 9897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.237 22 <nil> <nil>}
I1216 02:26:19.848180 9897 main.go:143] libmachine: About to run SSH command:
exit 0
I1216 02:26:19.961667 9897 main.go:143] libmachine: SSH cmd err, output: <nil>:
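(The "exit 0" probe above amounts to the following manual check; hypothetical, with the key path and username taken from the sshutil lines later in this log:

ssh -o StrictHostKeyChecking=no -i /home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa docker@192.168.39.237 'exit 0' && echo SSH up)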
I1216 02:26:19.962084 9897 main.go:143] libmachine: domain creation complete
I1216 02:26:19.963552 9897 machine.go:94] provisionDockerMachine start ...
I1216 02:26:19.965472 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:19.965813 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:19.965838 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:19.965998 9897 main.go:143] libmachine: Using SSH client type: native
I1216 02:26:19.966189 9897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.237 22 <nil> <nil>}
I1216 02:26:19.966199 9897 main.go:143] libmachine: About to run SSH command:
hostname
I1216 02:26:20.071281 9897 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1216 02:26:20.071327 9897 buildroot.go:166] provisioning hostname "addons-703051"
I1216 02:26:20.074100 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:20.074466 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:20.074489 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:20.074640 9897 main.go:143] libmachine: Using SSH client type: native
I1216 02:26:20.074826 9897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.237 22 <nil> <nil>}
I1216 02:26:20.074837 9897 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-703051 && echo "addons-703051" | sudo tee /etc/hostname
I1216 02:26:20.194474 9897 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-703051
I1216 02:26:20.197735 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:20.198231 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:20.198261 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:20.198410 9897 main.go:143] libmachine: Using SSH client type: native
I1216 02:26:20.198639 9897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.237 22 <nil> <nil>}
I1216 02:26:20.198656 9897 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-703051' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-703051/g' /etc/hosts;
else
echo '127.0.1.1 addons-703051' | sudo tee -a /etc/hosts;
fi
fi
I1216 02:26:20.314865 9897 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1216 02:26:20.314895 9897 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5036/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5036/.minikube}
I1216 02:26:20.314912 9897 buildroot.go:174] setting up certificates
I1216 02:26:20.314948 9897 provision.go:84] configureAuth start
I1216 02:26:20.317907 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:20.318277 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:20.318298 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:20.320859 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:20.321199 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:20.321223 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:20.321353 9897 provision.go:143] copyHostCerts
I1216 02:26:20.321420 9897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5036/.minikube/ca.pem (1078 bytes)
I1216 02:26:20.321534 9897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5036/.minikube/cert.pem (1123 bytes)
I1216 02:26:20.321606 9897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5036/.minikube/key.pem (1679 bytes)
I1216 02:26:20.321676 9897 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5036/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca-key.pem org=jenkins.addons-703051 san=[127.0.0.1 192.168.39.237 addons-703051 localhost minikube]
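(Should the certificate need debugging, the SANs listed above can be confirmed with a standard openssl check; hypothetical, not run by the test:

openssl x509 -in /home/jenkins/minikube-integration/22158-5036/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name')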
I1216 02:26:20.531072 9897 provision.go:177] copyRemoteCerts
I1216 02:26:20.531126 9897 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1216 02:26:20.533743 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:20.534076 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:20.534096 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:20.534216 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:20.618881 9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1216 02:26:20.647139 9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1216 02:26:20.676061 9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1216 02:26:20.702870 9897 provision.go:87] duration metric: took 387.893907ms to configureAuth
I1216 02:26:20.702895 9897 buildroot.go:189] setting minikube options for container-runtime
I1216 02:26:20.703106 9897 config.go:182] Loaded profile config "addons-703051": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:26:20.705799 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:20.706122 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:20.706144 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:20.706350 9897 main.go:143] libmachine: Using SSH client type: native
I1216 02:26:20.706536 9897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.237 22 <nil> <nil>}
I1216 02:26:20.706550 9897 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1216 02:26:20.957897 9897 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1216 02:26:20.957920 9897 machine.go:97] duration metric: took 994.353618ms to provisionDockerMachine
I1216 02:26:20.957953 9897 client.go:176] duration metric: took 17.164630287s to LocalClient.Create
I1216 02:26:20.957973 9897 start.go:167] duration metric: took 17.164685125s to libmachine.API.Create "addons-703051"
I1216 02:26:20.957981 9897 start.go:293] postStartSetup for "addons-703051" (driver="kvm2")
I1216 02:26:20.957991 9897 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1216 02:26:20.958044 9897 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1216 02:26:20.960985 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:20.961434 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:20.961470 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:20.961633 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:21.045310 9897 ssh_runner.go:195] Run: cat /etc/os-release
I1216 02:26:21.050502 9897 info.go:137] Remote host: Buildroot 2025.02
I1216 02:26:21.050534 9897 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5036/.minikube/addons for local assets ...
I1216 02:26:21.050638 9897 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5036/.minikube/files for local assets ...
I1216 02:26:21.050683 9897 start.go:296] duration metric: took 92.694156ms for postStartSetup
I1216 02:26:21.053986 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:21.054412 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:21.054441 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:21.054674 9897 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/config.json ...
I1216 02:26:21.054880 9897 start.go:128] duration metric: took 17.263103723s to createHost
I1216 02:26:21.057022 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:21.057289 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:21.057306 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:21.057502 9897 main.go:143] libmachine: Using SSH client type: native
I1216 02:26:21.057730 9897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.237 22 <nil> <nil>}
I1216 02:26:21.057742 9897 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1216 02:26:21.165164 9897 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765851981.127109611
I1216 02:26:21.165186 9897 fix.go:216] guest clock: 1765851981.127109611
I1216 02:26:21.165194 9897 fix.go:229] Guest: 2025-12-16 02:26:21.127109611 +0000 UTC Remote: 2025-12-16 02:26:21.05489083 +0000 UTC m=+17.351991699 (delta=72.218781ms)
I1216 02:26:21.165207 9897 fix.go:200] guest clock delta is within tolerance: 72.218781ms
I1216 02:26:21.165211 9897 start.go:83] releasing machines lock for "addons-703051", held for 17.373492806s
I1216 02:26:21.168286 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:21.168696 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:21.168715 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:21.169192 9897 ssh_runner.go:195] Run: cat /version.json
I1216 02:26:21.169260 9897 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1216 02:26:21.172456 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:21.172533 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:21.172858 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:21.172893 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:21.172914 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:21.172957 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:21.173079 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:21.173250 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:21.250522 9897 ssh_runner.go:195] Run: systemctl --version
I1216 02:26:21.287625 9897 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1216 02:26:21.448874 9897 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1216 02:26:21.455946 9897 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1216 02:26:21.456010 9897 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1216 02:26:21.475123 9897 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1216 02:26:21.475158 9897 start.go:496] detecting cgroup driver to use...
I1216 02:26:21.475220 9897 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1216 02:26:21.494824 9897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1216 02:26:21.511073 9897 docker.go:218] disabling cri-docker service (if available) ...
I1216 02:26:21.511148 9897 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1216 02:26:21.528083 9897 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1216 02:26:21.543455 9897 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1216 02:26:21.682422 9897 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1216 02:26:21.901202 9897 docker.go:234] disabling docker service ...
I1216 02:26:21.901273 9897 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1216 02:26:21.917132 9897 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1216 02:26:21.931716 9897 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1216 02:26:22.084617 9897 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1216 02:26:22.223713 9897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1216 02:26:22.239287 9897 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1216 02:26:22.260906 9897 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1216 02:26:22.261002 9897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1216 02:26:22.272997 9897 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1216 02:26:22.273056 9897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1216 02:26:22.284720 9897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1216 02:26:22.296263 9897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1216 02:26:22.307583 9897 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1216 02:26:22.319960 9897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1216 02:26:22.331123 9897 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1216 02:26:22.350422 9897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
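Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf carrying these keys; a verification sketch (expected values taken from the commands above, surrounding TOML tables vary by CRI-O build):

  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  # pause_image = "registry.k8s.io/pause:3.10.1"
  # cgroup_manager = "cgroupfs"
  # conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",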
I1216 02:26:22.362126 9897 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1216 02:26:22.372344 9897 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1216 02:26:22.372409 9897 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1216 02:26:22.393248 9897 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
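The modprobe and ip_forward steps above cover the standard bridge-netfilter prerequisites for pod networking; they can be spot-checked by hand (a sketch):

  lsmod | grep br_netfilter                        # loaded by the modprobe above
  sudo sysctl net.bridge.bridge-nf-call-iptables   # resolves once the module is loaded
  cat /proc/sys/net/ipv4/ip_forward                # expected: 1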
I1216 02:26:22.406245 9897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1216 02:26:22.539378 9897 ssh_runner.go:195] Run: sudo systemctl restart crio
I1216 02:26:22.646401 9897 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1216 02:26:22.646481 9897 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1216 02:26:22.651542 9897 start.go:564] Will wait 60s for crictl version
I1216 02:26:22.651614 9897 ssh_runner.go:195] Run: which crictl
I1216 02:26:22.655344 9897 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1216 02:26:22.689871 9897 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1216 02:26:22.690025 9897 ssh_runner.go:195] Run: crio --version
I1216 02:26:22.718442 9897 ssh_runner.go:195] Run: crio --version
I1216 02:26:22.747121 9897 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
I1216 02:26:22.751219 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:22.751576 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:22.751605 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:22.751813 9897 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1216 02:26:22.756270 9897 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1216 02:26:22.771078 9897 kubeadm.go:884] updating cluster {Name:addons-703051 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-703051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1216 02:26:22.771201 9897 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1216 02:26:22.771255 9897 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 02:26:22.800114 9897 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
I1216 02:26:22.800188 9897 ssh_runner.go:195] Run: which lz4
I1216 02:26:22.804337 9897 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1216 02:26:22.808796 9897 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1216 02:26:22.808834 9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
I1216 02:26:23.925489 9897 crio.go:462] duration metric: took 1.121189179s to copy over tarball
I1216 02:26:23.925553 9897 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1216 02:26:25.260659 9897 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.335077671s)
I1216 02:26:25.260684 9897 crio.go:469] duration metric: took 1.335169907s to extract the tarball
I1216 02:26:25.260691 9897 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1216 02:26:25.297960 9897 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 02:26:25.335867 9897 crio.go:514] all images are preloaded for cri-o runtime.
I1216 02:26:25.335887 9897 cache_images.go:86] Images are preloaded, skipping loading
I1216 02:26:25.335893 9897 kubeadm.go:935] updating node { 192.168.39.237 8443 v1.34.2 crio true true} ...
I1216 02:26:25.335990 9897 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-703051 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.237
[Install]
config:
{KubernetesVersion:v1.34.2 ClusterName:addons-703051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1216 02:26:25.336052 9897 ssh_runner.go:195] Run: crio config
I1216 02:26:25.378819 9897 cni.go:84] Creating CNI manager for ""
I1216 02:26:25.378839 9897 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1216 02:26:25.378853 9897 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1216 02:26:25.378878 9897 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.237 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-703051 NodeName:addons-703051 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1216 02:26:25.379041 9897 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.237
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "addons-703051"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.39.237"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
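A config assembled like the one above can be validated before it ever mutates the node, since kubeadm supports a dry-run mode; a sketch, not part of this run:

  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run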
I1216 02:26:25.379103 9897 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
I1216 02:26:25.391406 9897 binaries.go:51] Found k8s binaries, skipping transfer
I1216 02:26:25.391473 9897 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1216 02:26:25.403771 9897 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I1216 02:26:25.424115 9897 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1216 02:26:25.444698 9897 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
I1216 02:26:25.464297 9897 ssh_runner.go:195] Run: grep 192.168.39.237 control-plane.minikube.internal$ /etc/hosts
I1216 02:26:25.468231 9897 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.237 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
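Both hosts-file edits (host.minikube.internal earlier, control-plane.minikube.internal here) use the same grep-then-append idiom, which stays idempotent across re-runs; generalized as a sketch, with NAME and IP as placeholders:

  NAME=control-plane.minikube.internal; IP=192.168.39.237
  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts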
I1216 02:26:25.482512 9897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1216 02:26:25.622779 9897 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1216 02:26:25.653693 9897 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051 for IP: 192.168.39.237
I1216 02:26:25.653718 9897 certs.go:195] generating shared ca certs ...
I1216 02:26:25.653733 9897 certs.go:227] acquiring lock for ca certs: {Name:mk77e952ddad6d1f2b7d1d07b6d50cdef35b56ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:26:25.653873 9897 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5036/.minikube/ca.key
I1216 02:26:25.699828 9897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt ...
I1216 02:26:25.699859 9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt: {Name:mk96cbe67fb452e3df3335485db75f2b8d2e1ce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:26:25.700033 9897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5036/.minikube/ca.key ...
I1216 02:26:25.700045 9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/ca.key: {Name:mk50341eeb18c15b6a2b99322b38074283292ab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:26:25.700115 9897 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.key
I1216 02:26:25.754939 9897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.crt ...
I1216 02:26:25.754964 9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.crt: {Name:mk92d577e4f40a75e029f362bc1e4f62e633c62b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:26:25.755109 9897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.key ...
I1216 02:26:25.755129 9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.key: {Name:mk3216f3bfbcf2ff0102997b68be97acb112f4c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:26:25.755216 9897 certs.go:257] generating profile certs ...
I1216 02:26:25.755276 9897 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.key
I1216 02:26:25.755295 9897 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt with IP's: []
I1216 02:26:25.780115 9897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt ...
I1216 02:26:25.780133 9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: {Name:mk73766b0106d430ea9ac5c15a4dda9ff5c3e32c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:26:25.780258 9897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.key ...
I1216 02:26:25.780268 9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.key: {Name:mkcd34d963e04bce13bf159c3cf006123bf5dbe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:26:25.780330 9897 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.key.110bcffd
I1216 02:26:25.780346 9897 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.crt.110bcffd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.237]
I1216 02:26:25.882145 9897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.crt.110bcffd ...
I1216 02:26:25.882173 9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.crt.110bcffd: {Name:mkd13ac21a13491c25f23352f1398d3ad162c18b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:26:25.882315 9897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.key.110bcffd ...
I1216 02:26:25.882327 9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.key.110bcffd: {Name:mk4dfddb5bd59db476243c45983ffa412c6ec82d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:26:25.882397 9897 certs.go:382] copying /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.crt.110bcffd -> /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.crt
I1216 02:26:25.882465 9897 certs.go:386] copying /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.key.110bcffd -> /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.key
I1216 02:26:25.882506 9897 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/proxy-client.key
I1216 02:26:25.882524 9897 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/proxy-client.crt with IP's: []
I1216 02:26:25.932572 9897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/proxy-client.crt ...
I1216 02:26:25.932595 9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/proxy-client.crt: {Name:mk972001f2d21f7f6944ec53f3ff7c468aa275cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:26:25.932726 9897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/proxy-client.key ...
I1216 02:26:25.932737 9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/proxy-client.key: {Name:mkcf2a9bfe0760b09e02de6da2cbf550e9448e5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:26:25.932891 9897 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca-key.pem (1679 bytes)
I1216 02:26:25.932936 9897 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem (1078 bytes)
I1216 02:26:25.932964 9897 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem (1123 bytes)
I1216 02:26:25.932985 9897 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/key.pem (1679 bytes)
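Each generated pair can be inspected with openssl to confirm the SANs match the IP set logged above (a sketch using the path from this run; requires OpenSSL 1.1.1+ for -ext):

  openssl x509 -noout -subject -ext subjectAltName \
    -in /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.crt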
I1216 02:26:25.933481 9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1216 02:26:25.962794 9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1216 02:26:25.991606 9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1216 02:26:26.019154 9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1216 02:26:26.045872 9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1216 02:26:26.073577 9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1216 02:26:26.100306 9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1216 02:26:26.128965 9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1216 02:26:26.156176 9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1216 02:26:26.182603 9897 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1216 02:26:26.201153 9897 ssh_runner.go:195] Run: openssl version
I1216 02:26:26.207055 9897 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1216 02:26:26.217778 9897 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1216 02:26:26.228473 9897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1216 02:26:26.233042 9897 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:26 /usr/share/ca-certificates/minikubeCA.pem
I1216 02:26:26.233086 9897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1216 02:26:26.240675 9897 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1216 02:26:26.252375 9897 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
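The b5213941.0 link name follows OpenSSL's subject-hash convention, which is why the hash is computed before the symlink is made; the same check by hand (a sketch):

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  ls -l /etc/ssl/certs/b5213941.0   # symlink to /etc/ssl/certs/minikubeCA.pem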
I1216 02:26:26.263538 9897 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1216 02:26:26.268010 9897 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1216 02:26:26.268070 9897 kubeadm.go:401] StartCluster: {Name:addons-703051 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-703051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1216 02:26:26.268159 9897 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1216 02:26:26.268213 9897 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1216 02:26:26.300826 9897 cri.go:89] found id: ""
I1216 02:26:26.300880 9897 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1216 02:26:26.314662 9897 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1216 02:26:26.326632 9897 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1216 02:26:26.341701 9897 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1216 02:26:26.341718 9897 kubeadm.go:158] found existing configuration files:
I1216 02:26:26.341753 9897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1216 02:26:26.355181 9897 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1216 02:26:26.355230 9897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1216 02:26:26.369476 9897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1216 02:26:26.379830 9897 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1216 02:26:26.379883 9897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1216 02:26:26.390614 9897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1216 02:26:26.400363 9897 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1216 02:26:26.400416 9897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1216 02:26:26.411747 9897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1216 02:26:26.421789 9897 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1216 02:26:26.421842 9897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1216 02:26:26.432321 9897 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1216 02:26:26.576766 9897 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1216 02:26:39.439411 9897 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
I1216 02:26:39.439481 9897 kubeadm.go:319] [preflight] Running pre-flight checks
I1216 02:26:39.439568 9897 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1216 02:26:39.439676 9897 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1216 02:26:39.439792 9897 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1216 02:26:39.439886 9897 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1216 02:26:39.441191 9897 out.go:252] - Generating certificates and keys ...
I1216 02:26:39.441285 9897 kubeadm.go:319] [certs] Using existing ca certificate authority
I1216 02:26:39.441364 9897 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1216 02:26:39.441452 9897 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1216 02:26:39.441543 9897 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1216 02:26:39.441612 9897 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1216 02:26:39.441693 9897 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1216 02:26:39.441783 9897 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1216 02:26:39.441920 9897 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-703051 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
I1216 02:26:39.442007 9897 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1216 02:26:39.442176 9897 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-703051 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
I1216 02:26:39.442287 9897 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1216 02:26:39.442375 9897 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1216 02:26:39.442412 9897 kubeadm.go:319] [certs] Generating "sa" key and public key
I1216 02:26:39.442457 9897 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1216 02:26:39.442497 9897 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1216 02:26:39.442557 9897 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1216 02:26:39.442618 9897 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1216 02:26:39.442684 9897 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1216 02:26:39.442752 9897 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1216 02:26:39.442888 9897 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1216 02:26:39.442978 9897 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1216 02:26:39.445052 9897 out.go:252] - Booting up control plane ...
I1216 02:26:39.445157 9897 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1216 02:26:39.445236 9897 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1216 02:26:39.445293 9897 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1216 02:26:39.445427 9897 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1216 02:26:39.445555 9897 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1216 02:26:39.445688 9897 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1216 02:26:39.445773 9897 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1216 02:26:39.445806 9897 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1216 02:26:39.445969 9897 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1216 02:26:39.446107 9897 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1216 02:26:39.446486 9897 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501620119s
I1216 02:26:39.446599 9897 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1216 02:26:39.446705 9897 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.237:8443/livez
I1216 02:26:39.446800 9897 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1216 02:26:39.446888 9897 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1216 02:26:39.447001 9897 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.119131436s
I1216 02:26:39.447105 9897 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.570120988s
I1216 02:26:39.447201 9897 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501413189s
I1216 02:26:39.447380 9897 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1216 02:26:39.447548 9897 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1216 02:26:39.447649 9897 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1216 02:26:39.447836 9897 kubeadm.go:319] [mark-control-plane] Marking the node addons-703051 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1216 02:26:39.447914 9897 kubeadm.go:319] [bootstrap-token] Using token: uz28uy.dsgpl4o4zuxnmuzz
I1216 02:26:39.449354 9897 out.go:252] - Configuring RBAC rules ...
I1216 02:26:39.449495 9897 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1216 02:26:39.449600 9897 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1216 02:26:39.449764 9897 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1216 02:26:39.449936 9897 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1216 02:26:39.450036 9897 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1216 02:26:39.450107 9897 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1216 02:26:39.450212 9897 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1216 02:26:39.450250 9897 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1216 02:26:39.450304 9897 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1216 02:26:39.450312 9897 kubeadm.go:319]
I1216 02:26:39.450366 9897 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1216 02:26:39.450372 9897 kubeadm.go:319]
I1216 02:26:39.450458 9897 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1216 02:26:39.450464 9897 kubeadm.go:319]
I1216 02:26:39.450484 9897 kubeadm.go:319] mkdir -p $HOME/.kube
I1216 02:26:39.450580 9897 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1216 02:26:39.450666 9897 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1216 02:26:39.450675 9897 kubeadm.go:319]
I1216 02:26:39.450751 9897 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1216 02:26:39.450765 9897 kubeadm.go:319]
I1216 02:26:39.450841 9897 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1216 02:26:39.450854 9897 kubeadm.go:319]
I1216 02:26:39.450947 9897 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1216 02:26:39.451049 9897 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1216 02:26:39.451144 9897 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1216 02:26:39.451152 9897 kubeadm.go:319]
I1216 02:26:39.451251 9897 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1216 02:26:39.451350 9897 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1216 02:26:39.451359 9897 kubeadm.go:319]
I1216 02:26:39.451463 9897 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token uz28uy.dsgpl4o4zuxnmuzz \
I1216 02:26:39.451593 9897 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:6d3bac17af9f836812b78bb65fe3149db071d191150485ad31b907e98cbc14f1 \
I1216 02:26:39.451623 9897 kubeadm.go:319] --control-plane
I1216 02:26:39.451632 9897 kubeadm.go:319]
I1216 02:26:39.451730 9897 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1216 02:26:39.451742 9897 kubeadm.go:319]
I1216 02:26:39.451852 9897 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token uz28uy.dsgpl4o4zuxnmuzz \
I1216 02:26:39.452047 9897 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:6d3bac17af9f836812b78bb65fe3149db071d191150485ad31b907e98cbc14f1
I1216 02:26:39.452060 9897 cni.go:84] Creating CNI manager for ""
I1216 02:26:39.452066 9897 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1216 02:26:39.453397 9897 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1216 02:26:39.454508 9897 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1216 02:26:39.471915 9897 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
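The 496-byte conflist written above is minikube's bridge CNI config; a minimal conflist of the usual shape for the pod CIDR in this run would look roughly like the following (a sketch only; field values are assumed, not read from the test, and minikube's actual template may differ):

  {
    "cniVersion": "1.0.0",
    "name": "bridge",
    "plugins": [
      { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }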
I1216 02:26:39.497529 9897 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1216 02:26:39.497668 9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 02:26:39.497670 9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-703051 minikube.k8s.io/updated_at=2025_12_16T02_26_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=addons-703051 minikube.k8s.io/primary=true
I1216 02:26:39.543664 9897 ops.go:34] apiserver oom_adj: -16
I1216 02:26:39.616196 9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 02:26:40.117125 9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 02:26:40.616670 9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 02:26:41.117092 9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 02:26:41.616319 9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 02:26:42.116330 9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 02:26:42.616795 9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 02:26:43.116301 9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 02:26:43.616287 9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 02:26:43.690518 9897 kubeadm.go:1114] duration metric: took 4.192920237s to wait for elevateKubeSystemPrivileges
I1216 02:26:43.690568 9897 kubeadm.go:403] duration metric: took 17.422501356s to StartCluster
I1216 02:26:43.690591 9897 settings.go:142] acquiring lock: {Name:mk546ecdfe1860ae68a814905b53e6453298b4fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:26:43.690738 9897 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22158-5036/kubeconfig
I1216 02:26:43.691209 9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/kubeconfig: {Name:mk6832d71ef0ad581fa898dceefc2fcc2fd665b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:26:43.691425 9897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1216 02:26:43.691456 9897 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1216 02:26:43.691630 9897 config.go:182] Loaded profile config "addons-703051": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:26:43.691578 9897 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
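The same per-profile toggles in the toEnable map above are exposed through the CLI once the cluster is up; a sketch using this run's binary and profile:

  out/minikube-linux-amd64 -p addons-703051 addons list
  out/minikube-linux-amd64 -p addons-703051 addons enable metrics-server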
I1216 02:26:43.691751 9897 addons.go:70] Setting gcp-auth=true in profile "addons-703051"
I1216 02:26:43.691768 9897 addons.go:70] Setting yakd=true in profile "addons-703051"
I1216 02:26:43.691778 9897 addons.go:70] Setting ingress=true in profile "addons-703051"
I1216 02:26:43.691789 9897 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-703051"
I1216 02:26:43.691795 9897 addons.go:239] Setting addon ingress=true in "addons-703051"
I1216 02:26:43.691800 9897 addons.go:239] Setting addon yakd=true in "addons-703051"
I1216 02:26:43.691814 9897 addons.go:70] Setting cloud-spanner=true in profile "addons-703051"
I1216 02:26:43.691831 9897 host.go:66] Checking if "addons-703051" exists ...
I1216 02:26:43.691825 9897 addons.go:70] Setting storage-provisioner=true in profile "addons-703051"
I1216 02:26:43.691840 9897 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-703051"
I1216 02:26:43.691842 9897 addons.go:239] Setting addon cloud-spanner=true in "addons-703051"
I1216 02:26:43.691848 9897 addons.go:239] Setting addon storage-provisioner=true in "addons-703051"
I1216 02:26:43.691852 9897 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-703051"
I1216 02:26:43.691856 9897 addons.go:70] Setting volumesnapshots=true in profile "addons-703051"
I1216 02:26:43.691869 9897 host.go:66] Checking if "addons-703051" exists ...
I1216 02:26:43.691874 9897 addons.go:239] Setting addon volumesnapshots=true in "addons-703051"
I1216 02:26:43.691877 9897 host.go:66] Checking if "addons-703051" exists ...
I1216 02:26:43.691880 9897 addons.go:70] Setting registry=true in profile "addons-703051"
I1216 02:26:43.691882 9897 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-703051"
I1216 02:26:43.691890 9897 addons.go:70] Setting ingress-dns=true in profile "addons-703051"
I1216 02:26:43.691903 9897 addons.go:70] Setting default-storageclass=true in profile "addons-703051"
I1216 02:26:43.691904 9897 addons.go:70] Setting metrics-server=true in profile "addons-703051"
I1216 02:26:43.691909 9897 host.go:66] Checking if "addons-703051" exists ...
I1216 02:26:43.691780 9897 mustload.go:66] Loading cluster: addons-703051
I1216 02:26:43.691917 9897 addons.go:239] Setting addon metrics-server=true in "addons-703051"
I1216 02:26:43.691918 9897 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-703051"
I1216 02:26:43.691952 9897 host.go:66] Checking if "addons-703051" exists ...
I1216 02:26:43.692091 9897 config.go:182] Loaded profile config "addons-703051": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:26:43.691840 9897 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-703051"
I1216 02:26:43.692420 9897 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-703051"
I1216 02:26:43.691865 9897 addons.go:70] Setting registry-creds=true in profile "addons-703051"
I1216 02:26:43.692725 9897 addons.go:239] Setting addon registry-creds=true in "addons-703051"
I1216 02:26:43.691832 9897 host.go:66] Checking if "addons-703051" exists ...
I1216 02:26:43.691882 9897 addons.go:70] Setting inspektor-gadget=true in profile "addons-703051"
I1216 02:26:43.692791 9897 addons.go:239] Setting addon inspektor-gadget=true in "addons-703051"
I1216 02:26:43.692813 9897 host.go:66] Checking if "addons-703051" exists ...
I1216 02:26:43.691893 9897 host.go:66] Checking if "addons-703051" exists ...
I1216 02:26:43.691849 9897 addons.go:70] Setting volcano=true in profile "addons-703051"
I1216 02:26:43.693074 9897 addons.go:239] Setting addon volcano=true in "addons-703051"
I1216 02:26:43.693102 9897 host.go:66] Checking if "addons-703051" exists ...
I1216 02:26:43.691871 9897 host.go:66] Checking if "addons-703051" exists ...
I1216 02:26:43.693408 9897 out.go:179] * Verifying Kubernetes components...
I1216 02:26:43.692750 9897 host.go:66] Checking if "addons-703051" exists ...
I1216 02:26:43.691894 9897 addons.go:239] Setting addon registry=true in "addons-703051"
I1216 02:26:43.693607 9897 host.go:66] Checking if "addons-703051" exists ...
I1216 02:26:43.691907 9897 addons.go:239] Setting addon ingress-dns=true in "addons-703051"
I1216 02:26:43.693680 9897 host.go:66] Checking if "addons-703051" exists ...
I1216 02:26:43.691769 9897 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-703051"
I1216 02:26:43.693711 9897 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-703051"
I1216 02:26:43.693730 9897 host.go:66] Checking if "addons-703051" exists ...
I1216 02:26:43.694968 9897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1216 02:26:43.697700 9897 host.go:66] Checking if "addons-703051" exists ...
I1216 02:26:43.699054 9897 addons.go:239] Setting addon default-storageclass=true in "addons-703051"
I1216 02:26:43.699084 9897 host.go:66] Checking if "addons-703051" exists ...
I1216 02:26:43.700025 9897 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1216 02:26:43.700052 9897 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1216 02:26:43.700091 9897 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1216 02:26:43.700750 9897 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1216 02:26:43.701123 9897 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-703051"
I1216 02:26:43.700762 9897 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
I1216 02:26:43.701152 9897 host.go:66] Checking if "addons-703051" exists ...
W1216 02:26:43.701896 9897 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1216 02:26:43.702018 9897 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1216 02:26:43.702043 9897 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1216 02:26:43.702457 9897 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1216 02:26:43.702052 9897 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1216 02:26:43.702577 9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1216 02:26:43.702772 9897 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1216 02:26:43.702779 9897 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1216 02:26:43.702789 9897 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1216 02:26:43.702802 9897 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
I1216 02:26:43.702827 9897 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1216 02:26:43.702847 9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1216 02:26:43.703717 9897 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1216 02:26:43.703733 9897 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1216 02:26:43.703742 9897 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1216 02:26:43.704005 9897 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1216 02:26:43.704232 9897 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1216 02:26:43.707366 9897 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1216 02:26:43.707377 9897 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1216 02:26:43.707379 9897 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1216 02:26:43.707390 9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1216 02:26:43.707403 9897 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1216 02:26:43.707381 9897 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1216 02:26:43.707530 9897 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1216 02:26:43.707538 9897 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1216 02:26:43.708207 9897 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1216 02:26:43.708214 9897 out.go:179] - Using image docker.io/busybox:stable
I1216 02:26:43.708650 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.708793 9897 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1216 02:26:43.708807 9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1216 02:26:43.708821 9897 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1216 02:26:43.708831 9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1216 02:26:43.708793 9897 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1216 02:26:43.708872 9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1216 02:26:43.709053 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.709408 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.709569 9897 out.go:179] - Using image docker.io/registry:3.0.0
I1216 02:26:43.709576 9897 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
I1216 02:26:43.709648 9897 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1216 02:26:43.709662 9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1216 02:26:43.710221 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:43.710250 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.710341 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.710435 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:43.710470 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.710689 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:43.710718 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.710767 9897 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1216 02:26:43.710811 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:43.710866 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.711039 9897 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1216 02:26:43.711097 9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1216 02:26:43.711248 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:43.711331 9897 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1216 02:26:43.711344 9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1216 02:26:43.711619 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:43.711650 9897 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1216 02:26:43.712008 9897 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1216 02:26:43.712030 9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1216 02:26:43.712355 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:43.712388 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.712585 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:43.712614 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.713120 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:43.713193 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:43.713732 9897 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1216 02:26:43.714975 9897 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1216 02:26:43.716003 9897 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1216 02:26:43.717016 9897 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1216 02:26:43.717158 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.717899 9897 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1216 02:26:43.717953 9897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
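A note on the "scp memory --> <path>" lines above: the manifest bytes are rendered from minikube's embedded addon assets and streamed straight over the SSH connection; nothing is read from a file on the host (contrast the "scp csi-hostpath-driver/rbac/..." lines, which name a source asset). A minimal sketch of that pattern, assuming an established *ssh.Client; the helper name and the sudo tee trick are illustrative, not minikube's actual sshutil code:

    package sketch

    import (
    	"bytes"
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    // copyMemoryFile streams in-memory manifest bytes to a path on the guest,
    // which is what a "scp memory --> /etc/kubernetes/addons/..." line reports.
    func copyMemoryFile(client *ssh.Client, data []byte, dst string) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data) // source is memory, not a host file
    	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
    }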
I1216 02:26:43.718053 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.718182 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:43.718213 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.718446 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.718864 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:43.719070 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.719177 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.719652 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:43.719685 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.719905 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:43.719955 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.719982 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.720109 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:43.720373 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:43.720445 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.720401 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:43.720531 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.720573 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:43.720797 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:43.721150 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.721227 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:43.721296 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:43.721322 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.721329 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.721560 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:43.721852 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.721912 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:43.721956 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.722303 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:43.722509 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:43.722545 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.722595 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:43.722641 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.722705 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:43.722915 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:43.723949 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.724446 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:43.724478 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:43.724644 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
W1216 02:26:43.917697 9897 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54634->192.168.39.237:22: read: connection reset by peer
I1216 02:26:43.917730 9897 retry.go:31] will retry after 260.18533ms: ssh: handshake failed: read tcp 192.168.39.1:54634->192.168.39.237:22: read: connection reset by peer
W1216 02:26:43.946111 9897 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54652->192.168.39.237:22: read: connection reset by peer
I1216 02:26:43.946141 9897 retry.go:31] will retry after 290.504819ms: ssh: handshake failed: read tcp 192.168.39.1:54652->192.168.39.237:22: read: connection reset by peer
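The two handshake failures just above are startup noise: many SSH sessions were opened in parallel while the guest's sshd was still settling, so a couple of dials got their connections reset. sshutil logs the failure and retry.go re-dials after a short randomized delay. The shape of that loop, as a sketch (the attempt count and delay bounds here are assumptions, not minikube's retry.go values):

    package sketch

    import (
    	"log"
    	"math/rand"
    	"time"
    )

    // retryDial re-runs dial after a short randomized delay, matching the
    // "will retry after 260.18533ms" lines above in spirit.
    func retryDial(dial func() error, attempts int) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = dial(); err == nil {
    			return nil
    		}
    		delay := time.Duration(200+rand.Intn(200)) * time.Millisecond
    		log.Printf("will retry after %v: %v", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }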
I1216 02:26:44.022122 9897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1216 02:26:44.054036 9897 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1216 02:26:44.299895 9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1216 02:26:44.300699 9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1216 02:26:44.334354 9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1216 02:26:44.380754 9897 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1216 02:26:44.380780 9897 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1216 02:26:44.392050 9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1216 02:26:44.431041 9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1216 02:26:44.484521 9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1216 02:26:44.523606 9897 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1216 02:26:44.523626 9897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1216 02:26:44.539740 9897 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1216 02:26:44.539759 9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1216 02:26:44.544336 9897 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1216 02:26:44.544359 9897 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1216 02:26:44.549857 9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1216 02:26:44.559534 9897 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1216 02:26:44.559552 9897 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1216 02:26:44.560772 9897 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1216 02:26:44.560792 9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1216 02:26:44.584689 9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1216 02:26:44.879456 9897 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1216 02:26:44.879486 9897 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1216 02:26:44.962072 9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1216 02:26:45.026319 9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1216 02:26:45.042389 9897 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1216 02:26:45.042417 9897 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1216 02:26:45.056839 9897 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1216 02:26:45.056873 9897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1216 02:26:45.130086 9897 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1216 02:26:45.130110 9897 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1216 02:26:45.260473 9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1216 02:26:45.385715 9897 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1216 02:26:45.385772 9897 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1216 02:26:45.420181 9897 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1216 02:26:45.420212 9897 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1216 02:26:45.574777 9897 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1216 02:26:45.574811 9897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1216 02:26:45.624452 9897 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1216 02:26:45.624503 9897 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1216 02:26:45.733011 9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1216 02:26:45.982219 9897 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1216 02:26:45.982247 9897 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1216 02:26:46.167269 9897 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1216 02:26:46.167308 9897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1216 02:26:46.196824 9897 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1216 02:26:46.196853 9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1216 02:26:46.308954 9897 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1216 02:26:46.308983 9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1216 02:26:46.455317 9897 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1216 02:26:46.455351 9897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1216 02:26:46.535473 9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1216 02:26:46.663469 9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1216 02:26:46.804093 9897 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1216 02:26:46.804121 9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1216 02:26:47.014056 9897 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.991890726s)
I1216 02:26:47.014098 9897 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
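What that three-second sed pipeline at 02:26:44 actually did: it pulled the coredns ConfigMap, inserted a log directive before the errors line and a hosts block before the "forward . /etc/resolv.conf" line, then replaced the ConfigMap, so in-cluster DNS now resolves host.minikube.internal to the host. Reconstructed from the command itself, the edited part of the Corefile reads:

        log
        errors
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf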
I1216 02:26:47.014127 9897 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.960060673s)
I1216 02:26:47.014768 9897 node_ready.go:35] waiting up to 6m0s for node "addons-703051" to be "Ready" ...
I1216 02:26:47.039570 9897 node_ready.go:49] node "addons-703051" is "Ready"
I1216 02:26:47.039601 9897 node_ready.go:38] duration metric: took 24.815989ms for node "addons-703051" to be "Ready" ...
I1216 02:26:47.039612 9897 api_server.go:52] waiting for apiserver process to appear ...
I1216 02:26:47.039659 9897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
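About the probe itself: pgrep -xnf matches against the full command line (-f), requires the whole pattern to match (-x), and prints only the newest matching PID (-n), so it returns exactly one kube-apiserver process once one exists.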
I1216 02:26:47.153509 9897 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1216 02:26:47.153537 9897 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1216 02:26:47.544722 9897 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-703051" context rescaled to 1 replicas
I1216 02:26:47.685399 9897 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1216 02:26:47.685438 9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1216 02:26:48.041556 9897 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1216 02:26:48.041577 9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1216 02:26:48.291264 9897 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1216 02:26:48.291294 9897 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1216 02:26:48.443449 9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1216 02:26:50.566888 9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (6.266161165s)
I1216 02:26:50.567006 9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.267075182s)
I1216 02:26:50.567102 9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.2327195s)
I1216 02:26:50.567152 9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.175073705s)
I1216 02:26:50.567220 9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.136144795s)
I1216 02:26:50.567251 9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.082702609s)
I1216 02:26:50.567297 9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.017402841s)
I1216 02:26:50.567337 9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.982620356s)
W1216 02:26:50.693849 9897 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
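The "object has been modified" failure is the apiserver's optimistic-concurrency check, not a real fault: something else updated the local-path StorageClass between minikube's read and its write, so the write carried a stale resourceVersion and was rejected. The idiomatic client-go remedy is to re-read and re-apply inside retry.RetryOnConflict; a sketch assuming the goal is setting the default-class annotation (not necessarily the exact code path minikube retries here):

    package sketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/util/retry"
    )

    // markDefault re-reads the StorageClass on every conflict so the update
    // always carries a fresh resourceVersion.
    func markDefault(ctx context.Context, cs kubernetes.Interface) error {
    	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		sc, err := cs.StorageV1().StorageClasses().Get(ctx, "local-path", metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if sc.Annotations == nil {
    			sc.Annotations = map[string]string{}
    		}
    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
    		return err
    	})
    }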
I1216 02:26:51.190990 9897 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1216 02:26:51.193683 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:51.194106 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:51.194132 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:51.194319 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:51.378931 9897 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1216 02:26:51.456721 9897 addons.go:239] Setting addon gcp-auth=true in "addons-703051"
I1216 02:26:51.456777 9897 host.go:66] Checking if "addons-703051" exists ...
I1216 02:26:51.458873 9897 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1216 02:26:51.461743 9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:51.462191 9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
I1216 02:26:51.462227 9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
I1216 02:26:51.462403 9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
I1216 02:26:52.676571 9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.714459473s)
I1216 02:26:52.676613 9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.650263291s)
I1216 02:26:52.676615 9897 addons.go:495] Verifying addon ingress=true in "addons-703051"
I1216 02:26:52.676626 9897 addons.go:495] Verifying addon registry=true in "addons-703051"
I1216 02:26:52.676775 9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.943733818s)
I1216 02:26:52.676799 9897 addons.go:495] Verifying addon metrics-server=true in "addons-703051"
I1216 02:26:52.676856 9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.141346096s)
I1216 02:26:52.676689 9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.416182696s)
I1216 02:26:52.676963 9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.013450778s)
I1216 02:26:52.677001 9897 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.63732773s)
I1216 02:26:52.677026 9897 api_server.go:72] duration metric: took 8.985541628s to wait for apiserver process to appear ...
I1216 02:26:52.677036 9897 api_server.go:88] waiting for apiserver healthz status ...
I1216 02:26:52.677053 9897 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
W1216 02:26:52.676999 9897 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1216 02:26:52.677161 9897 retry.go:31] will retry after 206.529654ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
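Both copies of this error describe one well-known ordering race: the VolumeSnapshotClass custom resource sits in the same kubectl apply as the CRDs that define it, and kubectl resolves kinds via discovery before the just-created CRDs are being served, hence "no matches for kind VolumeSnapshotClass". The retry (and, below at 02:26:52.884839, the apply --force re-run) succeeds once the CRDs are established; applying CRDs in a first pass and gating the second pass on kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io would sidestep the race entirely.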
I1216 02:26:52.678652 9897 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-703051 service yakd-dashboard -n yakd-dashboard
I1216 02:26:52.678664 9897 out.go:179] * Verifying registry addon...
I1216 02:26:52.678669 9897 out.go:179] * Verifying ingress addon...
I1216 02:26:52.680761 9897 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1216 02:26:52.681031 9897 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1216 02:26:52.714909 9897 api_server.go:279] https://192.168.39.237:8443/healthz returned 200:
ok
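The healthz gate is just an HTTPS GET against the apiserver, repeated until the body reads "ok". A minimal sketch (TLS verification is skipped purely to keep the illustration short; a real client trusts the cluster CA from the kubeconfig):

    package sketch

    import (
    	"crypto/tls"
    	"io"
    	"net/http"
    	"time"
    )

    // healthz fetches https://<addr>/healthz; the caller compares the body to "ok".
    func healthz(addr string) (string, error) {
    	c := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := c.Get("https://" + addr + "/healthz")
    	if err != nil {
    		return "", err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	return string(body), err
    }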
I1216 02:26:52.730295 9897 api_server.go:141] control plane version: v1.34.2
I1216 02:26:52.730330 9897 api_server.go:131] duration metric: took 53.286434ms to wait for apiserver health ...
I1216 02:26:52.730342 9897 system_pods.go:43] waiting for kube-system pods to appear ...
I1216 02:26:52.730342 9897 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1216 02:26:52.730360 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:26:52.730640 9897 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1216 02:26:52.730651 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
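Each kapi.go:96 line from here on is one tick of the same loop: list pods by label selector, print the current phase, sleep, go around again until everything is Running and Ready. The client-go equivalent of one tick, as a sketch (cs is an assumed kubernetes.Interface):

    package sketch

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // anyNotRunning reports whether any pod matching selector is still short of Running.
    func anyNotRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
    	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return false, err
    	}
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			return true, nil // the "current state: Pending" case in the log
    		}
    	}
    	return false, nil
    }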
I1216 02:26:52.757680 9897 system_pods.go:59] 17 kube-system pods found
I1216 02:26:52.757717 9897 system_pods.go:61] "amd-gpu-device-plugin-4fpsx" [03ef77d5-d326-4953-8e23-ca6c08e8e512] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1216 02:26:52.757728 9897 system_pods.go:61] "coredns-66bc5c9577-4tgqh" [4edb0229-7f11-4e58-90a8-01dc7c8fe069] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1216 02:26:52.757739 9897 system_pods.go:61] "coredns-66bc5c9577-njd54" [fcee9a3a-3aad-44ae-b91c-62813e31b787] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1216 02:26:52.757746 9897 system_pods.go:61] "etcd-addons-703051" [6e78bbfd-5089-4514-8052-9e857a63cf57] Running
I1216 02:26:52.757752 9897 system_pods.go:61] "kube-apiserver-addons-703051" [68ed3f4f-ce1c-4adb-a6e8-86a57e309fb6] Running
I1216 02:26:52.757757 9897 system_pods.go:61] "kube-controller-manager-addons-703051" [83da4a9d-056e-4e75-b40b-030d0a61647f] Running
I1216 02:26:52.757762 9897 system_pods.go:61] "kube-ingress-dns-minikube" [c3e6df2b-852c-4522-9103-54ca55c8c849] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1216 02:26:52.757766 9897 system_pods.go:61] "kube-proxy-mwxm8" [064f9463-1ca7-46b8-8428-a3450e6a50a7] Running
I1216 02:26:52.757802 9897 system_pods.go:61] "kube-scheduler-addons-703051" [0b5f5d06-4a4f-4a55-826b-917b981d723a] Running
I1216 02:26:52.757813 9897 system_pods.go:61] "metrics-server-85b7d694d7-f4xbr" [972a1533-af9a-480f-a4fb-80c6f4653290] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1216 02:26:52.757819 9897 system_pods.go:61] "nvidia-device-plugin-daemonset-dj88n" [aba0db89-f004-4cbb-880e-fda531ad78c4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1216 02:26:52.757826 9897 system_pods.go:61] "registry-6b586f9694-l9ptj" [96cdab4e-1722-4bce-87dc-d0c270e803a6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1216 02:26:52.757832 9897 system_pods.go:61] "registry-creds-764b6fb674-cx22t" [f12a412c-c2cf-4510-8362-2985e7f119b5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1216 02:26:52.757838 9897 system_pods.go:61] "registry-proxy-qx2bk" [ceaffdc5-fb32-4337-a3c4-e6a2a1d6a2b2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1216 02:26:52.757845 9897 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9q7t2" [3bbe3e1d-3ff2-43f5-a29d-006fbdfebbae] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1216 02:26:52.757851 9897 system_pods.go:61] "snapshot-controller-7d9fbc56b8-t2tmt" [b57f6aee-1317-4615-9a50-0c808f07c954] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1216 02:26:52.757859 9897 system_pods.go:61] "storage-provisioner" [2daa6974-1bd3-4976-87ef-c939bb232e93] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1216 02:26:52.757864 9897 system_pods.go:74] duration metric: took 27.515371ms to wait for pod list to return data ...
I1216 02:26:52.757872 9897 default_sa.go:34] waiting for default service account to be created ...
I1216 02:26:52.771524 9897 default_sa.go:45] found service account: "default"
I1216 02:26:52.771543 9897 default_sa.go:55] duration metric: took 13.663789ms for default service account to be created ...
I1216 02:26:52.771551 9897 system_pods.go:116] waiting for k8s-apps to be running ...
I1216 02:26:52.853152 9897 system_pods.go:86] 17 kube-system pods found
I1216 02:26:52.853179 9897 system_pods.go:89] "amd-gpu-device-plugin-4fpsx" [03ef77d5-d326-4953-8e23-ca6c08e8e512] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1216 02:26:52.853187 9897 system_pods.go:89] "coredns-66bc5c9577-4tgqh" [4edb0229-7f11-4e58-90a8-01dc7c8fe069] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1216 02:26:52.853194 9897 system_pods.go:89] "coredns-66bc5c9577-njd54" [fcee9a3a-3aad-44ae-b91c-62813e31b787] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1216 02:26:52.853199 9897 system_pods.go:89] "etcd-addons-703051" [6e78bbfd-5089-4514-8052-9e857a63cf57] Running
I1216 02:26:52.853204 9897 system_pods.go:89] "kube-apiserver-addons-703051" [68ed3f4f-ce1c-4adb-a6e8-86a57e309fb6] Running
I1216 02:26:52.853207 9897 system_pods.go:89] "kube-controller-manager-addons-703051" [83da4a9d-056e-4e75-b40b-030d0a61647f] Running
I1216 02:26:52.853212 9897 system_pods.go:89] "kube-ingress-dns-minikube" [c3e6df2b-852c-4522-9103-54ca55c8c849] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1216 02:26:52.853217 9897 system_pods.go:89] "kube-proxy-mwxm8" [064f9463-1ca7-46b8-8428-a3450e6a50a7] Running
I1216 02:26:52.853221 9897 system_pods.go:89] "kube-scheduler-addons-703051" [0b5f5d06-4a4f-4a55-826b-917b981d723a] Running
I1216 02:26:52.853226 9897 system_pods.go:89] "metrics-server-85b7d694d7-f4xbr" [972a1533-af9a-480f-a4fb-80c6f4653290] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1216 02:26:52.853235 9897 system_pods.go:89] "nvidia-device-plugin-daemonset-dj88n" [aba0db89-f004-4cbb-880e-fda531ad78c4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1216 02:26:52.853241 9897 system_pods.go:89] "registry-6b586f9694-l9ptj" [96cdab4e-1722-4bce-87dc-d0c270e803a6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1216 02:26:52.853246 9897 system_pods.go:89] "registry-creds-764b6fb674-cx22t" [f12a412c-c2cf-4510-8362-2985e7f119b5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1216 02:26:52.853252 9897 system_pods.go:89] "registry-proxy-qx2bk" [ceaffdc5-fb32-4337-a3c4-e6a2a1d6a2b2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1216 02:26:52.853257 9897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9q7t2" [3bbe3e1d-3ff2-43f5-a29d-006fbdfebbae] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1216 02:26:52.853263 9897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t2tmt" [b57f6aee-1317-4615-9a50-0c808f07c954] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1216 02:26:52.853267 9897 system_pods.go:89] "storage-provisioner" [2daa6974-1bd3-4976-87ef-c939bb232e93] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1216 02:26:52.853274 9897 system_pods.go:126] duration metric: took 81.71908ms to wait for k8s-apps to be running ...
I1216 02:26:52.853282 9897 system_svc.go:44] waiting for kubelet service to be running ....
I1216 02:26:52.853323 9897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1216 02:26:52.884839 9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1216 02:26:53.205889 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:26:53.206025 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:26:53.339793 9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.896291209s)
I1216 02:26:53.339841 9897 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-703051"
I1216 02:26:53.339861 9897 system_svc.go:56] duration metric: took 486.57168ms WaitForService to wait for kubelet
I1216 02:26:53.339880 9897 kubeadm.go:587] duration metric: took 9.648393662s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1216 02:26:53.339907 9897 node_conditions.go:102] verifying NodePressure condition ...
I1216 02:26:53.339810 9897 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.880911227s)
I1216 02:26:53.341755 9897 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1216 02:26:53.341770 9897 out.go:179] * Verifying csi-hostpath-driver addon...
I1216 02:26:53.343011 9897 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1216 02:26:53.343754 9897 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1216 02:26:53.344156 9897 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1216 02:26:53.344170 9897 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1216 02:26:53.353475 9897 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1216 02:26:53.353494 9897 node_conditions.go:123] node cpu capacity is 2
I1216 02:26:53.353514 9897 node_conditions.go:105] duration metric: took 13.601339ms to run NodePressure ...
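For scale: 17734596Ki of ephemeral storage is roughly 16.9GiB, and a CPU capacity of 2 is typical for a small single-node VM, so neither the disk- nor the CPU-pressure condition is anywhere near tripping here.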
I1216 02:26:53.353532 9897 start.go:242] waiting for startup goroutines ...
I1216 02:26:53.364017 9897 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1216 02:26:53.364031 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:26:53.471542 9897 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1216 02:26:53.471565 9897 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1216 02:26:53.539480 9897 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1216 02:26:53.539502 9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1216 02:26:53.614071 9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1216 02:26:53.684148 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:26:53.687214 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:26:53.856377 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:26:54.187075 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:26:54.187110 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:26:54.351984 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:26:54.640966 9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.756084415s)
I1216 02:26:54.715367 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:26:54.715551 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:26:54.787145 9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.173034789s)
I1216 02:26:54.788117 9897 addons.go:495] Verifying addon gcp-auth=true in "addons-703051"
I1216 02:26:54.790120 9897 out.go:179] * Verifying gcp-auth addon...
I1216 02:26:54.791739 9897 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1216 02:26:54.816956 9897 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1216 02:26:54.816979 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:26:54.909749 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
[kapi.go:96 polling repeats every ~500ms from 02:26:55 through 02:27:02, cycling through the same four selectors (app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=registry, kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=csi-hostpath-driver); every tick reports "current state: Pending: [<nil>]"]
I1216 02:27:02.294867 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:02.346559 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:02.842554 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:02.843446 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:02.843522 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:02.847202 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:03.185678 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:03.185678 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:03.296334 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:03.349732 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:03.685168 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:03.685594 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:03.796979 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:03.849130 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:04.188829 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:04.188953 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:04.296898 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:04.346951 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:04.684994 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:04.687735 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:04.795524 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:04.849656 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:05.185058 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:05.186278 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:05.296833 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:05.351420 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:05.792635 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:05.794625 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:05.795290 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:05.850965 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:06.186808 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:06.187656 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:06.296302 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:06.397319 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:06.689495 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:06.689671 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:06.795705 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:06.849348 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:07.186562 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:07.187017 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:07.295241 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:07.348427 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:07.686737 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:07.686966 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:07.801014 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:07.848054 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:08.329870 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:08.332882 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:08.333410 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:08.347792 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:08.685082 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:08.685377 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:08.795404 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:08.847436 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:09.188976 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:09.188975 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:09.294643 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:09.347481 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:09.685764 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:09.685764 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:09.794388 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:09.847431 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:10.185775 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:10.185866 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:10.294777 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:10.348197 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:10.690649 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:10.690898 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:10.800538 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:10.850128 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:11.186868 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:11.187434 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:11.295659 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:11.347952 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:11.688049 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:11.688550 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:11.795549 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:11.849498 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:12.185374 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:12.185655 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:12.295595 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:12.347430 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:12.685904 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:12.686352 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:12.794948 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:12.847115 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:13.187709 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:13.187914 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:13.295122 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:13.346918 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:13.684579 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:13.684859 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:13.801090 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:13.848351 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:14.185004 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:14.186551 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:14.296780 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:14.348156 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:14.687182 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:14.687380 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:14.795303 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:14.851137 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:15.186515 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:15.186561 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:15.297688 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:15.352089 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:15.685156 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:15.685740 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:15.794596 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:15.848314 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:16.184611 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 02:27:16.186302 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:16.294991 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:16.347445 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:16.685700 9897 kapi.go:107] duration metric: took 24.004662987s to wait for kubernetes.io/minikube-addons=registry ...
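The kapi.go:96 / kapi.go:107 lines above are one poll loop per label selector: list the matching pods, log the aggregate state, and stop the clock once everything is up. A minimal client-go sketch of that pattern follows; the helper name, the all-namespaces listing, and the ~500ms interval (inferred from the timestamp cadence) are assumptions, not minikube's actual kapi.go.

package kapisketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForSelector polls pods matching selector until every one is Running,
// mirroring the "waiting for pod ... current state: Pending" lines above.
func waitForSelector(ctx context.Context, cs kubernetes.Interface, selector string) (time.Duration, error) {
	start := time.Now()
	ticker := time.NewTicker(500 * time.Millisecond) // assumed; the log polls roughly twice per second
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return 0, fmt.Errorf("waiting for %q: %w", selector, ctx.Err())
		case <-ticker.C:
			pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				continue // empty lists and transient errors surface above as Pending: [<nil>]
			}
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					break
				}
			}
			if allRunning {
				return time.Since(start), nil // reported as the "duration metric: took ..." line
			}
		}
	}
}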
I1216 02:27:16.685833 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:16.794271 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:16.847328 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:17.184986 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:17.294997 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:17.347702 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:17.684886 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:17.794604 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:17.847368 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:18.186797 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:18.296276 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:18.351082 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:18.685612 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:18.796793 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:18.848204 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:19.186384 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:19.295250 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:19.346984 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:19.685158 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:19.797022 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:19.848004 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:20.186138 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:20.296642 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:20.350759 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:20.684853 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:20.795407 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:20.848428 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:21.186530 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:21.294378 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:21.348145 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:21.684963 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:21.797706 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:21.850651 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:22.184615 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:22.295204 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:22.347839 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:22.687154 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:22.796239 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:22.847212 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:23.283533 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:23.298141 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:23.347312 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:23.685234 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:23.801411 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:23.849963 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:24.350276 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:24.351269 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:24.353679 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:24.685369 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:24.796259 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:24.848961 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:25.186255 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:25.544806 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:25.546850 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:25.686399 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:25.795492 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:25.848152 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:26.185471 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:26.296416 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:26.347898 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:26.685532 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:26.796322 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:26.848468 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:27.185263 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:27.294957 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:27.346558 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:27.686794 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:27.798173 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:27.846778 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:28.184446 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:28.296022 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:28.349180 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:28.685100 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:28.795170 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:28.847135 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:29.184818 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:29.294681 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:29.347969 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:29.684450 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:29.795017 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:29.846656 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:30.183963 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:30.294800 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:30.351338 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:30.687364 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:30.796099 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:30.848625 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:31.188061 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:31.296589 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:31.347757 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:31.691796 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:31.795838 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:31.848309 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:32.187836 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:32.297009 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:32.348058 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:32.685840 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:32.915398 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:32.917075 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:33.185129 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:33.296013 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:33.347519 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:33.687396 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:33.795898 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:33.846984 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:34.184497 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:34.295316 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:34.347410 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:34.687168 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:34.796998 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:34.849490 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:35.184825 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:35.296124 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:35.347910 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:36.056810 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:36.057049 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:36.057286 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:36.186890 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:36.294434 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:36.348373 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:36.685126 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:36.794678 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:36.851194 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:37.185074 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:37.294696 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:37.352827 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:37.688224 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:37.797395 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:37.848062 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:38.185685 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:38.295386 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:38.347301 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:38.684534 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:38.796400 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:38.849725 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:39.184652 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:39.296892 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:39.348486 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:39.686450 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:39.796788 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:39.896775 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:40.192122 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:40.296518 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:40.349134 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:40.685644 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:40.799333 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:40.848195 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:41.186352 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:41.295540 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:41.348400 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:41.685529 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:41.800268 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:41.849612 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:42.186015 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:42.296508 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:42.349123 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:42.687979 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:42.942671 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:42.943451 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:43.185915 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:43.297434 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:43.350640 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:43.689457 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:43.795100 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:43.847041 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:44.184856 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:44.294258 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:44.348342 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:44.685214 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:44.795766 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:44.896669 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:45.196476 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:45.296909 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:45.351400 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:45.685703 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:45.795139 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:45.849892 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:46.184376 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:46.295896 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:46.349935 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:46.689901 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:46.797530 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:46.847466 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:47.186759 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:47.298520 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:47.349755 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:47.684796 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:47.797429 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:47.846890 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:48.184617 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:48.295018 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:48.347659 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:48.687101 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:48.795874 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:48.853496 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:49.188199 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:49.295430 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:49.347655 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:49.684133 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:49.794435 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:49.847091 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:50.185834 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:50.295662 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:50.347197 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:50.686367 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:50.796226 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:50.847124 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:51.188617 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:51.296113 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:51.350732 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:51.685086 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:51.795815 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:51.846488 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:52.185811 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:52.295909 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:52.349599 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:52.683964 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:52.795018 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:52.850067 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:53.184650 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:53.295648 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:53.348572 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 02:27:53.685415 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:53.799428 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:53.847349 9897 kapi.go:107] duration metric: took 1m0.503592744s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
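The four selector streams interleave in the log because each wait runs independently, and each finishes on its own clock (24.0s for registry above, 1m0.5s for csi-hostpath-driver here). A sketch of that fan-out with errgroup, reusing the hypothetical waitForSelector helper from the earlier block; the selectors are copied from the log, the concurrency wiring is an assumption:

package kapisketch

import (
	"context"

	"golang.org/x/sync/errgroup"
	"k8s.io/client-go/kubernetes"
)

// waitForAddonPods runs one label-selector wait per addon concurrently,
// which is why the four "waiting for pod" streams interleave above.
func waitForAddonPods(ctx context.Context, cs kubernetes.Interface) error {
	selectors := []string{
		"kubernetes.io/minikube-addons=registry",
		"app.kubernetes.io/name=ingress-nginx",
		"kubernetes.io/minikube-addons=gcp-auth",
		"kubernetes.io/minikube-addons=csi-hostpath-driver",
	}
	g, gctx := errgroup.WithContext(ctx)
	for _, sel := range selectors {
		sel := sel // pin the loop variable for Go < 1.22
		g.Go(func() error {
			_, err := waitForSelector(gctx, cs, sel)
			return err
		})
	}
	return g.Wait() // first error (e.g. a timeout) cancels the remaining waits via gctx
}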
I1216 02:27:54.184770 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:54.295396 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:54.685130 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:54.794940 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:55.184935 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:55.294887 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:55.684754 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:55.795582 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:56.184695 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:56.294549 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:56.684825 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:56.794807 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:57.185516 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:57.296948 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:57.686245 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:57.799129 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:58.185271 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:58.295950 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:58.688024 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:58.795911 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:59.186294 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:59.297363 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:27:59.685095 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:27:59.797417 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:28:00.252892 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:28:00.297124 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:28:00.685557 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:28:00.797350 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:28:01.190671 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:28:01.297256 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:28:01.686596 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:28:01.869623 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:28:02.186168 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:28:02.296745 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:28:02.685566 9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 02:28:02.797045 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:28:03.191349 9897 kapi.go:107] duration metric: took 1m10.510585602s to wait for app.kubernetes.io/name=ingress-nginx ...
I1216 02:28:03.295656 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:28:03.795509 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:28:04.295644 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:28:04.795013 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:28:05.296142 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:28:05.795308 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:28:06.295895 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:28:06.799564 9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 02:28:07.296686 9897 kapi.go:107] duration metric: took 1m12.504944867s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1216 02:28:07.298077 9897 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-703051 cluster.
I1216 02:28:07.299105 9897 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1216 02:28:07.300113 9897 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
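Per the message above, the opt-out is a pod label with the gcp-auth-skip-secret key. A hypothetical pod spec carrying that label, expressed with client-go types; the "true" value and the image tag are assumptions, since the message only requires the key to be present:

package kapisketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// skipGCPAuthPod builds a pod the gcp-auth webhook should leave alone.
func skipGCPAuthPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"}, // key from the log; value assumed
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "public.ecr.aws/nginx/nginx:latest"}},
		},
	}
}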
I1216 02:28:07.301132 9897 out.go:179] * Enabled addons: inspektor-gadget, storage-provisioner, ingress-dns, nvidia-device-plugin, amd-gpu-device-plugin, cloud-spanner, default-storageclass, metrics-server, registry-creds, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I1216 02:28:07.302044 9897 addons.go:530] duration metric: took 1m23.610473322s for enable addons: enabled=[inspektor-gadget storage-provisioner ingress-dns nvidia-device-plugin amd-gpu-device-plugin cloud-spanner default-storageclass metrics-server registry-creds yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I1216 02:28:07.302078 9897 start.go:247] waiting for cluster config update ...
I1216 02:28:07.302093 9897 start.go:256] writing updated cluster config ...
I1216 02:28:07.302375 9897 ssh_runner.go:195] Run: rm -f paused
I1216 02:28:07.309144 9897 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1216 02:28:07.312141 9897 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4tgqh" in "kube-system" namespace to be "Ready" or be gone ...
I1216 02:28:07.317308 9897 pod_ready.go:94] pod "coredns-66bc5c9577-4tgqh" is "Ready"
I1216 02:28:07.317325 9897 pod_ready.go:86] duration metric: took 5.162711ms for pod "coredns-66bc5c9577-4tgqh" in "kube-system" namespace to be "Ready" or be gone ...
I1216 02:28:07.318846 9897 pod_ready.go:83] waiting for pod "etcd-addons-703051" in "kube-system" namespace to be "Ready" or be gone ...
I1216 02:28:07.324159 9897 pod_ready.go:94] pod "etcd-addons-703051" is "Ready"
I1216 02:28:07.324182 9897 pod_ready.go:86] duration metric: took 5.315081ms for pod "etcd-addons-703051" in "kube-system" namespace to be "Ready" or be gone ...
I1216 02:28:07.326004 9897 pod_ready.go:83] waiting for pod "kube-apiserver-addons-703051" in "kube-system" namespace to be "Ready" or be gone ...
I1216 02:28:07.331185 9897 pod_ready.go:94] pod "kube-apiserver-addons-703051" is "Ready"
I1216 02:28:07.331204 9897 pod_ready.go:86] duration metric: took 5.180201ms for pod "kube-apiserver-addons-703051" in "kube-system" namespace to be "Ready" or be gone ...
I1216 02:28:07.333159 9897 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-703051" in "kube-system" namespace to be "Ready" or be gone ...
I1216 02:28:07.713347 9897 pod_ready.go:94] pod "kube-controller-manager-addons-703051" is "Ready"
I1216 02:28:07.713373 9897 pod_ready.go:86] duration metric: took 380.194535ms for pod "kube-controller-manager-addons-703051" in "kube-system" namespace to be "Ready" or be gone ...
I1216 02:28:07.913876 9897 pod_ready.go:83] waiting for pod "kube-proxy-mwxm8" in "kube-system" namespace to be "Ready" or be gone ...
I1216 02:28:08.313545 9897 pod_ready.go:94] pod "kube-proxy-mwxm8" is "Ready"
I1216 02:28:08.313569 9897 pod_ready.go:86] duration metric: took 399.67289ms for pod "kube-proxy-mwxm8" in "kube-system" namespace to be "Ready" or be gone ...
I1216 02:28:08.513511 9897 pod_ready.go:83] waiting for pod "kube-scheduler-addons-703051" in "kube-system" namespace to be "Ready" or be gone ...
I1216 02:28:08.913351 9897 pod_ready.go:94] pod "kube-scheduler-addons-703051" is "Ready"
I1216 02:28:08.913373 9897 pod_ready.go:86] duration metric: took 399.840179ms for pod "kube-scheduler-addons-703051" in "kube-system" namespace to be "Ready" or be gone ...
I1216 02:28:08.913383 9897 pod_ready.go:40] duration metric: took 1.604212346s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
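The selector list in the wait above pairs each control-plane component with its conventional label. For illustration, the metadata such a check would match on the etcd pod from this log (rest of the manifest omitted; the label pairing is assumed from the selector names):

    apiVersion: v1
    kind: Pod
    metadata:
      name: etcd-addons-703051
      namespace: kube-system
      labels:
        component: etcd    # matched by the component=etcd selector above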
I1216 02:28:08.958847 9897 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
I1216 02:28:08.960560 9897 out.go:179] * Done! kubectl is now configured to use "addons-703051" cluster and "default" namespace by default
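That final line means minikube has pointed kubectl at the new profile. A rough sketch of the kubeconfig entry this typically produces (values inferred from the profile name; server address and certificate fields elided):

    contexts:
    - context:
        cluster: addons-703051
        namespace: default
        user: addons-703051
      name: addons-703051
    current-context: addons-703051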
==> CRI-O <==
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.113536256Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f979fb2d-e6e4-4bed-a093-e092231dcee6 name=/runtime.v1.RuntimeService/ListContainers
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.113754496Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f979fb2d-e6e4-4bed-a093-e092231dcee6 name=/runtime.v1.RuntimeService/ListContainers
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.114526481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71f3caa0dccaaf15497314a0ad0e9ccbf3feff771407550dfdde32cffc5bb271,PodSandboxId:763cd2ef52e008c64f72b0e0585a9bddf5ab53d237c0ba56ffa04410abe9e9e7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765852130940239679,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ecb2063-1677-48a3-8f27-ea2c7d5c93c6,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0fbe6406af877a88a5083785c255d7e941dc2100c87d2fc6cfca0295fcbf1ed,PodSandboxId:cc4c3965fc757dd2531aee75116597d3e9e942c22508cbb90365f3a8debd3d62,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765852093102579336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58286223-3023-49e8-8c96-fbc4885799ab,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ce59b95ef32ae51897233591dba179882a1f2328a1d30205e066849cf2a740,PodSandboxId:d42faa16e2e656936396d9bdfb7b4ab0880c76777ce14a42e2c74f01a30a1629,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765852082870372180,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-shbcn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 09f33b70-4a2a-46bb-a669-c394ee4e50c4,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fab5c50c31014edbab8ede19b9ad93d96b39f7bbe924aede82f25a1e0aa588aa,PodSandboxId:10bfcaff868baf43197a9a8bb30550d60d00cffdc9fb220f9860726512f39291,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765852064753226316,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-srpnh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4639c7bb-d0f5-4de6-806a-ff36ea0d752d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087e5f624b27317d64265d94dfc14af129d6a16e2f2c8f8e3d8e80a9b212dbf0,PodSandboxId:422ae0ee2fb42db23a7cf7b51ec47d90d301557c04ce1fded8d56dde54cf7204,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765852061239569953,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vvbgk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0cdb4ee2-5764-461b-84ca-30bb4d8bc4a1,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23048578564ae1325eaf7e84c45f992ede79ae60b89ec635455ddbd5863f8280,PodSandboxId:6a4e7dd7c377bb16d5fe8b55760021f98ecdda13d1ec76aaa682b9fbab40faf5,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765852048345143921,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-b7pbm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 77521070-52a4-4796-8faf-799cc1b59cf3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0454ea8b922d25a2c9ca5713444c8978a81d6f934fa812292f554fcb1c77b80a,PodSandboxId:e8bbdea8ec749fac6e023c0ddb4527803d3bdbbaa3940c3a85425ccd6de375fe,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6c
d76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765852045655169508,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e6df2b-852c-4522-9103-54ca55c8c849,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bdaf5c98cdf2241fc96d2c03e80eae9e6104c632934288d1163b5799e7b6fed,PodSandboxId:350b7ed773819dc009f07d78eaba173d2aef24c1a6bd11602c3ba477c728be21,Metada
ta:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765852020565783471,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-4fpsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03ef77d5-d326-4953-8e23-ca6c08e8e512,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68acdb398858e7f163118bca702ed13a99eefaee99e6d8be83f3af1ec90af7b,PodSandboxId:72fd8159eb8bc5dcf2d94955296dc5d
9178c97870421baf64f47c79cbd89db57,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765852011543363498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2daa6974-1bd3-4976-87ef-c939bb232e93,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0388da6adb851f0d46886e354c87f018e40e3a963fb20cccea3430b926c6eccc,PodSandboxId:612b308ed418e0081ba2e4d9eae66bb5bf4a82aab17
4b9b7164ed9dd0dc8bd78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765852005645418287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4tgqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edb0229-7f11-4e58-90a8-01dc7c8fe069,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.ku
bernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1947cc0b3ab5e45d542ffca511b910b70e9b09ab19381d77587d1ffa064d6217,PodSandboxId:7b4be50a94b2d8987deaa5ee13f1789213654058adee5ef63a2d716a0c1ba8fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765852004934657951,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwxm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 064f9463-1ca7-46b8-8428-a3450e6a50a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f075f2bc2541e1114ca94b40540568b39a2ac8c5851d908739adbca47426b40,PodSandboxId:6058440c99aced855334ebbda5756776003af605bd06cb693332dbd9d0bb621f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765851991956369609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245b1a6750f2209a55a4a1baaaa78ec8,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20246f1c56f2b5aef9e3deb5898ea3b6f1a8c8732832f664d0c1ce79f0e058d4,PodSandboxId:62ceeff5c74e41e9cd844e4626e237f47ea62c7547a8bce67f18d048891c3762,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765851991963554720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8811ee76d1eb677a2bf71e866b381a00,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4ee09f2d08e04fafa64043bf9167661a9fac75aa2828ba1df68e1e9ac9d42d,PodSandboxId:e20407cf0464e587a7706f66ab4840fc86f700c347fdff7d35da90102aae0f09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765851991901228998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-703051,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: a94cf4464d1003900ad58539e89badef,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960abe7ee91b99d086a9808387f532e974c02cfe22515b93b199b699e1435675,PodSandboxId:825ff8245730e9248a69f1bbe4fef00205d0bd8900b2f07c16a68a75156e5031,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765851991892436142,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986649e00b22f8edb5a55a6ff7bf1f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f979fb2d-e6e4-4bed-a093-e092231dcee6 name=/runtime.v1.RuntimeService/ListContainers
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.135459817Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.147933925Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d1b80e6-dcab-4733-893c-25955cbe27f0 name=/runtime.v1.RuntimeService/Version
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.148147444Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d1b80e6-dcab-4733-893c-25955cbe27f0 name=/runtime.v1.RuntimeService/Version
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.149809250Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=31bff1ea-17bf-49e0-b903-848c69793724 name=/runtime.v1.ImageService/ImageFsInfo
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.150951948Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765852273150928936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=31bff1ea-17bf-49e0-b903-848c69793724 name=/runtime.v1.ImageService/ImageFsInfo
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.151855848Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4004ff82-0029-4e68-90f4-afaebb61cbb9 name=/runtime.v1.RuntimeService/ListContainers
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.151922694Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4004ff82-0029-4e68-90f4-afaebb61cbb9 name=/runtime.v1.RuntimeService/ListContainers
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.152233721Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71f3caa0dccaaf15497314a0ad0e9ccbf3feff771407550dfdde32cffc5bb271,PodSandboxId:763cd2ef52e008c64f72b0e0585a9bddf5ab53d237c0ba56ffa04410abe9e9e7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765852130940239679,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ecb2063-1677-48a3-8f27-ea2c7d5c93c6,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0fbe6406af877a88a5083785c255d7e941dc2100c87d2fc6cfca0295fcbf1ed,PodSandboxId:cc4c3965fc757dd2531aee75116597d3e9e942c22508cbb90365f3a8debd3d62,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765852093102579336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58286223-3023-49e8-8c96-fbc4885799ab,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ce59b95ef32ae51897233591dba179882a1f2328a1d30205e066849cf2a740,PodSandboxId:d42faa16e2e656936396d9bdfb7b4ab0880c76777ce14a42e2c74f01a30a1629,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765852082870372180,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-shbcn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 09f33b70-4a2a-46bb-a669-c394ee4e50c4,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fab5c50c31014edbab8ede19b9ad93d96b39f7bbe924aede82f25a1e0aa588aa,PodSandboxId:10bfcaff868baf43197a9a8bb30550d60d00cffdc9fb220f9860726512f39291,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765852064753226316,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-srpnh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4639c7bb-d0f5-4de6-806a-ff36ea0d752d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087e5f624b27317d64265d94dfc14af129d6a16e2f2c8f8e3d8e80a9b212dbf0,PodSandboxId:422ae0ee2fb42db23a7cf7b51ec47d90d301557c04ce1fded8d56dde54cf7204,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765852061239569953,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vvbgk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0cdb4ee2-5764-461b-84ca-30bb4d8bc4a1,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23048578564ae1325eaf7e84c45f992ede79ae60b89ec635455ddbd5863f8280,PodSandboxId:6a4e7dd7c377bb16d5fe8b55760021f98ecdda13d1ec76aaa682b9fbab40faf5,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765852048345143921,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-b7pbm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 77521070-52a4-4796-8faf-799cc1b59cf3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0454ea8b922d25a2c9ca5713444c8978a81d6f934fa812292f554fcb1c77b80a,PodSandboxId:e8bbdea8ec749fac6e023c0ddb4527803d3bdbbaa3940c3a85425ccd6de375fe,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6c
d76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765852045655169508,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e6df2b-852c-4522-9103-54ca55c8c849,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bdaf5c98cdf2241fc96d2c03e80eae9e6104c632934288d1163b5799e7b6fed,PodSandboxId:350b7ed773819dc009f07d78eaba173d2aef24c1a6bd11602c3ba477c728be21,Metada
ta:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765852020565783471,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-4fpsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03ef77d5-d326-4953-8e23-ca6c08e8e512,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68acdb398858e7f163118bca702ed13a99eefaee99e6d8be83f3af1ec90af7b,PodSandboxId:72fd8159eb8bc5dcf2d94955296dc5d
9178c97870421baf64f47c79cbd89db57,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765852011543363498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2daa6974-1bd3-4976-87ef-c939bb232e93,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0388da6adb851f0d46886e354c87f018e40e3a963fb20cccea3430b926c6eccc,PodSandboxId:612b308ed418e0081ba2e4d9eae66bb5bf4a82aab17
4b9b7164ed9dd0dc8bd78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765852005645418287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4tgqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edb0229-7f11-4e58-90a8-01dc7c8fe069,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.ku
bernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1947cc0b3ab5e45d542ffca511b910b70e9b09ab19381d77587d1ffa064d6217,PodSandboxId:7b4be50a94b2d8987deaa5ee13f1789213654058adee5ef63a2d716a0c1ba8fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765852004934657951,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwxm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 064f9463-1ca7-46b8-8428-a3450e6a50a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f075f2bc2541e1114ca94b40540568b39a2ac8c5851d908739adbca47426b40,PodSandboxId:6058440c99aced855334ebbda5756776003af605bd06cb693332dbd9d0bb621f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765851991956369609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245b1a6750f2209a55a4a1baaaa78ec8,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20246f1c56f2b5aef9e3deb5898ea3b6f1a8c8732832f664d0c1ce79f0e058d4,PodSandboxId:62ceeff5c74e41e9cd844e4626e237f47ea62c7547a8bce67f18d048891c3762,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765851991963554720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8811ee76d1eb677a2bf71e866b381a00,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4ee09f2d08e04fafa64043bf9167661a9fac75aa2828ba1df68e1e9ac9d42d,PodSandboxId:e20407cf0464e587a7706f66ab4840fc86f700c347fdff7d35da90102aae0f09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765851991901228998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-703051,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: a94cf4464d1003900ad58539e89badef,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960abe7ee91b99d086a9808387f532e974c02cfe22515b93b199b699e1435675,PodSandboxId:825ff8245730e9248a69f1bbe4fef00205d0bd8900b2f07c16a68a75156e5031,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765851991892436142,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986649e00b22f8edb5a55a6ff7bf1f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4004ff82-0029-4e68-90f4-afaebb61cbb9 name=/runtime.v1.RuntimeService/ListContainers
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.183804962Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fb43fcf6-dd82-496a-b427-b4384ee02d35 name=/runtime.v1.RuntimeService/Version
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.183874464Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb43fcf6-dd82-496a-b427-b4384ee02d35 name=/runtime.v1.RuntimeService/Version
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.185176700Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=009dda2d-fdbc-4d9e-9463-714d3b35356d name=/runtime.v1.ImageService/ImageFsInfo
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.186965520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765852273186879371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=009dda2d-fdbc-4d9e-9463-714d3b35356d name=/runtime.v1.ImageService/ImageFsInfo
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.187979887Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16bad539-0e62-48c7-80d5-25ac99a91990 name=/runtime.v1.RuntimeService/ListContainers
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.188033558Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16bad539-0e62-48c7-80d5-25ac99a91990 name=/runtime.v1.RuntimeService/ListContainers
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.188320826Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71f3caa0dccaaf15497314a0ad0e9ccbf3feff771407550dfdde32cffc5bb271,PodSandboxId:763cd2ef52e008c64f72b0e0585a9bddf5ab53d237c0ba56ffa04410abe9e9e7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765852130940239679,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ecb2063-1677-48a3-8f27-ea2c7d5c93c6,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0fbe6406af877a88a5083785c255d7e941dc2100c87d2fc6cfca0295fcbf1ed,PodSandboxId:cc4c3965fc757dd2531aee75116597d3e9e942c22508cbb90365f3a8debd3d62,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765852093102579336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58286223-3023-49e8-8c96-fbc4885799ab,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ce59b95ef32ae51897233591dba179882a1f2328a1d30205e066849cf2a740,PodSandboxId:d42faa16e2e656936396d9bdfb7b4ab0880c76777ce14a42e2c74f01a30a1629,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765852082870372180,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-shbcn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 09f33b70-4a2a-46bb-a669-c394ee4e50c4,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fab5c50c31014edbab8ede19b9ad93d96b39f7bbe924aede82f25a1e0aa588aa,PodSandboxId:10bfcaff868baf43197a9a8bb30550d60d00cffdc9fb220f9860726512f39291,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765852064753226316,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-srpnh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4639c7bb-d0f5-4de6-806a-ff36ea0d752d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087e5f624b27317d64265d94dfc14af129d6a16e2f2c8f8e3d8e80a9b212dbf0,PodSandboxId:422ae0ee2fb42db23a7cf7b51ec47d90d301557c04ce1fded8d56dde54cf7204,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765852061239569953,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vvbgk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0cdb4ee2-5764-461b-84ca-30bb4d8bc4a1,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23048578564ae1325eaf7e84c45f992ede79ae60b89ec635455ddbd5863f8280,PodSandboxId:6a4e7dd7c377bb16d5fe8b55760021f98ecdda13d1ec76aaa682b9fbab40faf5,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765852048345143921,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-b7pbm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 77521070-52a4-4796-8faf-799cc1b59cf3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0454ea8b922d25a2c9ca5713444c8978a81d6f934fa812292f554fcb1c77b80a,PodSandboxId:e8bbdea8ec749fac6e023c0ddb4527803d3bdbbaa3940c3a85425ccd6de375fe,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6c
d76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765852045655169508,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e6df2b-852c-4522-9103-54ca55c8c849,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bdaf5c98cdf2241fc96d2c03e80eae9e6104c632934288d1163b5799e7b6fed,PodSandboxId:350b7ed773819dc009f07d78eaba173d2aef24c1a6bd11602c3ba477c728be21,Metada
ta:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765852020565783471,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-4fpsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03ef77d5-d326-4953-8e23-ca6c08e8e512,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68acdb398858e7f163118bca702ed13a99eefaee99e6d8be83f3af1ec90af7b,PodSandboxId:72fd8159eb8bc5dcf2d94955296dc5d
9178c97870421baf64f47c79cbd89db57,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765852011543363498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2daa6974-1bd3-4976-87ef-c939bb232e93,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0388da6adb851f0d46886e354c87f018e40e3a963fb20cccea3430b926c6eccc,PodSandboxId:612b308ed418e0081ba2e4d9eae66bb5bf4a82aab17
4b9b7164ed9dd0dc8bd78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765852005645418287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4tgqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edb0229-7f11-4e58-90a8-01dc7c8fe069,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.ku
bernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1947cc0b3ab5e45d542ffca511b910b70e9b09ab19381d77587d1ffa064d6217,PodSandboxId:7b4be50a94b2d8987deaa5ee13f1789213654058adee5ef63a2d716a0c1ba8fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765852004934657951,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwxm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 064f9463-1ca7-46b8-8428-a3450e6a50a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f075f2bc2541e1114ca94b40540568b39a2ac8c5851d908739adbca47426b40,PodSandboxId:6058440c99aced855334ebbda5756776003af605bd06cb693332dbd9d0bb621f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765851991956369609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245b1a6750f2209a55a4a1baaaa78ec8,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20246f1c56f2b5aef9e3deb5898ea3b6f1a8c8732832f664d0c1ce79f0e058d4,PodSandboxId:62ceeff5c74e41e9cd844e4626e237f47ea62c7547a8bce67f18d048891c3762,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765851991963554720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8811ee76d1eb677a2bf71e866b381a00,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4ee09f2d08e04fafa64043bf9167661a9fac75aa2828ba1df68e1e9ac9d42d,PodSandboxId:e20407cf0464e587a7706f66ab4840fc86f700c347fdff7d35da90102aae0f09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765851991901228998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-703051,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: a94cf4464d1003900ad58539e89badef,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960abe7ee91b99d086a9808387f532e974c02cfe22515b93b199b699e1435675,PodSandboxId:825ff8245730e9248a69f1bbe4fef00205d0bd8900b2f07c16a68a75156e5031,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765851991892436142,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986649e00b22f8edb5a55a6ff7bf1f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16bad539-0e62-48c7-80d5-25ac99a91990 name=/runtime.v1.RuntimeService/ListContainers
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.221146300Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=34b0ffdd-5adf-447b-a913-3331bb5db8ce name=/runtime.v1.RuntimeService/Version
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.221270623Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=34b0ffdd-5adf-447b-a913-3331bb5db8ce name=/runtime.v1.RuntimeService/Version
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.222556416Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=324581e4-cc66-462d-bb24-01ebc7ccaf00 name=/runtime.v1.ImageService/ImageFsInfo
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.223768627Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765852273223743429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=324581e4-cc66-462d-bb24-01ebc7ccaf00 name=/runtime.v1.ImageService/ImageFsInfo
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.224404455Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e3f8eeb-b76a-4aae-bbc3-4a7aeca853f2 name=/runtime.v1.RuntimeService/ListContainers
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.224484175Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e3f8eeb-b76a-4aae-bbc3-4a7aeca853f2 name=/runtime.v1.RuntimeService/ListContainers
Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.224835335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71f3caa0dccaaf15497314a0ad0e9ccbf3feff771407550dfdde32cffc5bb271,PodSandboxId:763cd2ef52e008c64f72b0e0585a9bddf5ab53d237c0ba56ffa04410abe9e9e7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765852130940239679,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ecb2063-1677-48a3-8f27-ea2c7d5c93c6,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0fbe6406af877a88a5083785c255d7e941dc2100c87d2fc6cfca0295fcbf1ed,PodSandboxId:cc4c3965fc757dd2531aee75116597d3e9e942c22508cbb90365f3a8debd3d62,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765852093102579336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58286223-3023-49e8-8c96-fbc4885799ab,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ce59b95ef32ae51897233591dba179882a1f2328a1d30205e066849cf2a740,PodSandboxId:d42faa16e2e656936396d9bdfb7b4ab0880c76777ce14a42e2c74f01a30a1629,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765852082870372180,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-shbcn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 09f33b70-4a2a-46bb-a669-c394ee4e50c4,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fab5c50c31014edbab8ede19b9ad93d96b39f7bbe924aede82f25a1e0aa588aa,PodSandboxId:10bfcaff868baf43197a9a8bb30550d60d00cffdc9fb220f9860726512f39291,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765852064753226316,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-srpnh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4639c7bb-d0f5-4de6-806a-ff36ea0d752d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087e5f624b27317d64265d94dfc14af129d6a16e2f2c8f8e3d8e80a9b212dbf0,PodSandboxId:422ae0ee2fb42db23a7cf7b51ec47d90d301557c04ce1fded8d56dde54cf7204,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765852061239569953,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vvbgk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0cdb4ee2-5764-461b-84ca-30bb4d8bc4a1,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23048578564ae1325eaf7e84c45f992ede79ae60b89ec635455ddbd5863f8280,PodSandboxId:6a4e7dd7c377bb16d5fe8b55760021f98ecdda13d1ec76aaa682b9fbab40faf5,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765852048345143921,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-b7pbm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 77521070-52a4-4796-8faf-799cc1b59cf3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0454ea8b922d25a2c9ca5713444c8978a81d6f934fa812292f554fcb1c77b80a,PodSandboxId:e8bbdea8ec749fac6e023c0ddb4527803d3bdbbaa3940c3a85425ccd6de375fe,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6c
d76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765852045655169508,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e6df2b-852c-4522-9103-54ca55c8c849,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bdaf5c98cdf2241fc96d2c03e80eae9e6104c632934288d1163b5799e7b6fed,PodSandboxId:350b7ed773819dc009f07d78eaba173d2aef24c1a6bd11602c3ba477c728be21,Metada
ta:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765852020565783471,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-4fpsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03ef77d5-d326-4953-8e23-ca6c08e8e512,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68acdb398858e7f163118bca702ed13a99eefaee99e6d8be83f3af1ec90af7b,PodSandboxId:72fd8159eb8bc5dcf2d94955296dc5d
9178c97870421baf64f47c79cbd89db57,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765852011543363498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2daa6974-1bd3-4976-87ef-c939bb232e93,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0388da6adb851f0d46886e354c87f018e40e3a963fb20cccea3430b926c6eccc,PodSandboxId:612b308ed418e0081ba2e4d9eae66bb5bf4a82aab17
4b9b7164ed9dd0dc8bd78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765852005645418287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4tgqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edb0229-7f11-4e58-90a8-01dc7c8fe069,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.ku
bernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1947cc0b3ab5e45d542ffca511b910b70e9b09ab19381d77587d1ffa064d6217,PodSandboxId:7b4be50a94b2d8987deaa5ee13f1789213654058adee5ef63a2d716a0c1ba8fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765852004934657951,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwxm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 064f9463-1ca7-46b8-8428-a3450e6a50a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f075f2bc2541e1114ca94b40540568b39a2ac8c5851d908739adbca47426b40,PodSandboxId:6058440c99aced855334ebbda5756776003af605bd06cb693332dbd9d0bb621f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765851991956369609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245b1a6750f2209a55a4a1baaaa78ec8,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20246f1c56f2b5aef9e3deb5898ea3b6f1a8c8732832f664d0c1ce79f0e058d4,PodSandboxId:62ceeff5c74e41e9cd844e4626e237f47ea62c7547a8bce67f18d048891c3762,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765851991963554720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8811ee76d1eb677a2bf71e866b381a00,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4ee09f2d08e04fafa64043bf9167661a9fac75aa2828ba1df68e1e9ac9d42d,PodSandboxId:e20407cf0464e587a7706f66ab4840fc86f700c347fdff7d35da90102aae0f09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765851991901228998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-703051,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: a94cf4464d1003900ad58539e89badef,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960abe7ee91b99d086a9808387f532e974c02cfe22515b93b199b699e1435675,PodSandboxId:825ff8245730e9248a69f1bbe4fef00205d0bd8900b2f07c16a68a75156e5031,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765851991892436142,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986649e00b22f8edb5a55a6ff7bf1f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e3f8eeb-b76a-4aae-bbc3-4a7aeca853f2 name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER      IMAGE                                                                                                                       CREATED        STATE    NAME                     ATTEMPT  POD ID         POD                                        NAMESPACE
71f3caa0dccaa  public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff                          2 minutes ago  Running  nginx                    0        763cd2ef52e00  nginx                                      default
b0fbe6406af87  gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                         3 minutes ago  Running  busybox                  0        cc4c3965fc757  busybox                                    default
e2ce59b95ef32  registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad            3 minutes ago  Running  controller               0        d42faa16e2e65  ingress-nginx-controller-85d4c799dd-shbcn  ingress-nginx
fab5c50c31014  registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285  3 minutes ago  Exited   patch                    0        10bfcaff868ba  ingress-nginx-admission-patch-srpnh        ingress-nginx
087e5f624b273  registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285  3 minutes ago  Exited   create                   0        422ae0ee2fb42  ingress-nginx-admission-create-vvbgk       ingress-nginx
23048578564ae  docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef            3 minutes ago  Running  local-path-provisioner   0        6a4e7dd7c377b  local-path-provisioner-648f6765c9-b7pbm    local-path-storage
0454ea8b922d2  docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7              3 minutes ago  Running  minikube-ingress-dns     0        e8bbdea8ec749  kube-ingress-dns-minikube                  kube-system
8bdaf5c98cdf2  docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                    4 minutes ago  Running  amd-gpu-device-plugin    0        350b7ed773819  amd-gpu-device-plugin-4fpsx                kube-system
c68acdb398858  6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                            4 minutes ago  Running  storage-provisioner      0        72fd8159eb8bc  storage-provisioner                        kube-system
0388da6adb851  52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                            4 minutes ago  Running  coredns                  0        612b308ed418e  coredns-66bc5c9577-4tgqh                   kube-system
1947cc0b3ab5e  8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                            4 minutes ago  Running  kube-proxy               0        7b4be50a94b2d  kube-proxy-mwxm8                           kube-system
20246f1c56f2b  a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                            4 minutes ago  Running  etcd                     0        62ceeff5c74e4  etcd-addons-703051                         kube-system
5f075f2bc2541  88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                            4 minutes ago  Running  kube-scheduler           0        6058440c99ace  kube-scheduler-addons-703051               kube-system
fc4ee09f2d08e  a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                            4 minutes ago  Running  kube-apiserver           0        e20407cf0464e  kube-apiserver-addons-703051               kube-system
960abe7ee91b9  01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                            4 minutes ago  Running  kube-controller-manager  0        825ff8245730e  kube-controller-manager-addons-703051      kube-system
==> coredns [0388da6adb851f0d46886e354c87f018e40e3a963fb20cccea3430b926c6eccc] <==
[INFO] 10.244.0.8:49772 - 36756 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000569298s
[INFO] 10.244.0.8:49772 - 13828 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001973974s
[INFO] 10.244.0.8:49772 - 25037 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000128971s
[INFO] 10.244.0.8:49772 - 19052 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000119224s
[INFO] 10.244.0.8:49772 - 1028 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000093758s
[INFO] 10.244.0.8:49772 - 52630 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000423177s
[INFO] 10.244.0.8:49772 - 32871 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000343165s
[INFO] 10.244.0.8:41144 - 60468 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000144941s
[INFO] 10.244.0.8:41144 - 60115 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000200543s
[INFO] 10.244.0.8:60638 - 13222 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000103536s
[INFO] 10.244.0.8:60638 - 12978 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000250688s
[INFO] 10.244.0.8:56950 - 4105 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111668s
[INFO] 10.244.0.8:56950 - 4540 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000195887s
[INFO] 10.244.0.8:49764 - 61881 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000093118s
[INFO] 10.244.0.8:49764 - 62076 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000411734s
[INFO] 10.244.0.23:46640 - 59151 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000634874s
[INFO] 10.244.0.23:38898 - 53246 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000228146s
[INFO] 10.244.0.23:59176 - 40575 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000280787s
[INFO] 10.244.0.23:49933 - 48090 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000110978s
[INFO] 10.244.0.23:34431 - 55636 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000139635s
[INFO] 10.244.0.23:57158 - 59196 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000260344s
[INFO] 10.244.0.23:56169 - 37233 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00145696s
[INFO] 10.244.0.23:38331 - 48625 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.002505553s
[INFO] 10.244.0.28:40445 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000567211s
[INFO] 10.244.0.28:56782 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000283659s
==> describe nodes <==
Name: addons-703051
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-703051
kubernetes.io/os=linux
minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
minikube.k8s.io/name=addons-703051
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_16T02_26_39_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-703051
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 16 Dec 2025 02:26:35 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-703051
AcquireTime: <unset>
RenewTime: Tue, 16 Dec 2025 02:31:04 +0000
Conditions:
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
----             ------  -----------------                 ------------------                ------                      -------
MemoryPressure   False   Tue, 16 Dec 2025 02:29:12 +0000   Tue, 16 Dec 2025 02:26:32 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
DiskPressure     False   Tue, 16 Dec 2025 02:29:12 +0000   Tue, 16 Dec 2025 02:26:32 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
PIDPressure      False   Tue, 16 Dec 2025 02:29:12 +0000   Tue, 16 Dec 2025 02:26:32 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
Ready            True    Tue, 16 Dec 2025 02:29:12 +0000   Tue, 16 Dec 2025 02:26:39 +0000   KubeletReady                kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.237
Hostname: addons-703051
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
System Info:
Machine ID: c4ab45a7215f430ebc6b14f6c9c94339
System UUID: c4ab45a7-215f-430e-bc6b-14f6c9c94339
Boot ID: 354609bf-8610-43fc-90e7-f3e35d0f06fe
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.2
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace           Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------           ----                                       ------------  ----------  ---------------  -------------  ---
default             busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
default             hello-world-app-5d498dc89-8b4zv            0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
default             nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
ingress-nginx       ingress-nginx-controller-85d4c799dd-shbcn  100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m21s
kube-system         amd-gpu-device-plugin-4fpsx                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
kube-system         coredns-66bc5c9577-4tgqh                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m29s
kube-system         etcd-addons-703051                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m35s
kube-system         kube-apiserver-addons-703051               250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m36s
kube-system         kube-controller-manager-addons-703051      200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m35s
kube-system         kube-ingress-dns-minikube                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
kube-system         kube-proxy-mwxm8                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
kube-system         kube-scheduler-addons-703051               100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m35s
kube-system         storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
local-path-storage  local-path-provisioner-648f6765c9-b7pbm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests    Limits
--------           --------    ------
cpu                850m (42%)  0 (0%)
memory             260Mi (6%)  170Mi (4%)
ephemeral-storage  0 (0%)      0 (0%)
hugepages-2Mi      0 (0%)      0 (0%)
Events:
Type    Reason                   Age                    From             Message
----    ------                   ----                   ----             -------
Normal  Starting                 4m27s                  kube-proxy
Normal  Starting                 4m42s                  kubelet          Starting kubelet.
Normal  NodeHasSufficientMemory  4m42s (x8 over 4m42s)  kubelet          Node addons-703051 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    4m42s (x8 over 4m42s)  kubelet          Node addons-703051 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     4m42s (x7 over 4m42s)  kubelet          Node addons-703051 status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  4m42s                  kubelet          Updated Node Allocatable limit across pods
Normal  Starting                 4m35s                  kubelet          Starting kubelet.
Normal  NodeAllocatableEnforced  4m35s                  kubelet          Updated Node Allocatable limit across pods
Normal  NodeHasSufficientMemory  4m35s                  kubelet          Node addons-703051 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    4m35s                  kubelet          Node addons-703051 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     4m35s                  kubelet          Node addons-703051 status is now: NodeHasSufficientPID
Normal  NodeReady                4m34s                  kubelet          Node addons-703051 status is now: NodeReady
Normal  RegisteredNode           4m30s                  node-controller  Node addons-703051 event: Registered Node addons-703051 in Controller
==> dmesg <==
[ +0.000033] kauditd_printk_skb: 348 callbacks suppressed
[ +0.741289] kauditd_printk_skb: 428 callbacks suppressed
[Dec16 02:27] kauditd_printk_skb: 227 callbacks suppressed
[ +5.656396] kauditd_printk_skb: 5 callbacks suppressed
[ +5.843452] kauditd_printk_skb: 38 callbacks suppressed
[ +13.586585] kauditd_printk_skb: 32 callbacks suppressed
[ +7.937861] kauditd_printk_skb: 20 callbacks suppressed
[ +6.114481] kauditd_printk_skb: 107 callbacks suppressed
[ +1.535550] kauditd_printk_skb: 85 callbacks suppressed
[ +1.602796] kauditd_printk_skb: 146 callbacks suppressed
[ +0.000039] kauditd_printk_skb: 35 callbacks suppressed
[Dec16 02:28] kauditd_printk_skb: 65 callbacks suppressed
[ +0.000029] kauditd_printk_skb: 38 callbacks suppressed
[ +5.311996] kauditd_printk_skb: 47 callbacks suppressed
[ +0.000089] kauditd_printk_skb: 22 callbacks suppressed
[ +0.779101] kauditd_printk_skb: 107 callbacks suppressed
[ +0.002160] kauditd_printk_skb: 90 callbacks suppressed
[ +2.169211] kauditd_printk_skb: 159 callbacks suppressed
[ +0.661709] kauditd_printk_skb: 162 callbacks suppressed
[ +0.932157] kauditd_printk_skb: 36 callbacks suppressed
[Dec16 02:29] kauditd_printk_skb: 24 callbacks suppressed
[ +8.989939] kauditd_printk_skb: 41 callbacks suppressed
[ +0.000063] kauditd_printk_skb: 10 callbacks suppressed
[ +6.859218] kauditd_printk_skb: 41 callbacks suppressed
[Dec16 02:31] kauditd_printk_skb: 127 callbacks suppressed
==> etcd [20246f1c56f2b5aef9e3deb5898ea3b6f1a8c8732832f664d0c1ce79f0e058d4] <==
{"level":"warn","ts":"2025-12-16T02:27:36.045599Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T02:27:35.678350Z","time spent":"367.239105ms","remote":"127.0.0.1:48036","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
{"level":"warn","ts":"2025-12-16T02:27:36.045744Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"189.788142ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-16T02:27:36.045765Z","caller":"traceutil/trace.go:172","msg":"trace[549298602] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1021; }","duration":"189.80812ms","start":"2025-12-16T02:27:35.855950Z","end":"2025-12-16T02:27:36.045759Z","steps":["trace[549298602] 'range keys from in-memory index tree' (duration: 189.745918ms)"],"step_count":1}
{"level":"info","ts":"2025-12-16T02:27:42.931007Z","caller":"traceutil/trace.go:172","msg":"trace[689026313] linearizableReadLoop","detail":"{readStateIndex:1081; appliedIndex:1081; }","duration":"141.335654ms","start":"2025-12-16T02:27:42.789655Z","end":"2025-12-16T02:27:42.930991Z","steps":["trace[689026313] 'read index received' (duration: 141.331405ms)","trace[689026313] 'applied index is now lower than readState.Index' (duration: 3.399µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-16T02:27:42.934890Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.451864ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-16T02:27:42.934953Z","caller":"traceutil/trace.go:172","msg":"trace[531332558] range","detail":"{range_begin:/registry/rolebindings; range_end:; response_count:0; response_revision:1055; }","duration":"118.528016ms","start":"2025-12-16T02:27:42.816415Z","end":"2025-12-16T02:27:42.934943Z","steps":["trace[531332558] 'agreement among raft nodes before linearized reading' (duration: 118.434159ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-16T02:27:42.934890Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.213551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-16T02:27:42.935003Z","caller":"traceutil/trace.go:172","msg":"trace[1860302882] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1055; }","duration":"145.343126ms","start":"2025-12-16T02:27:42.789651Z","end":"2025-12-16T02:27:42.934994Z","steps":["trace[1860302882] 'agreement among raft nodes before linearized reading' (duration: 141.962033ms)"],"step_count":1}
{"level":"info","ts":"2025-12-16T02:27:51.540319Z","caller":"traceutil/trace.go:172","msg":"trace[1135183922] transaction","detail":"{read_only:false; response_revision:1136; number_of_response:1; }","duration":"105.510637ms","start":"2025-12-16T02:27:51.434793Z","end":"2025-12-16T02:27:51.540304Z","steps":["trace[1135183922] 'process raft request' (duration: 105.423187ms)"],"step_count":1}
{"level":"info","ts":"2025-12-16T02:28:00.242867Z","caller":"traceutil/trace.go:172","msg":"trace[1306737295] linearizableReadLoop","detail":"{readStateIndex:1189; appliedIndex:1189; }","duration":"182.311851ms","start":"2025-12-16T02:28:00.060540Z","end":"2025-12-16T02:28:00.242852Z","steps":["trace[1306737295] 'read index received' (duration: 182.302747ms)","trace[1306737295] 'applied index is now lower than readState.Index' (duration: 8.008µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-16T02:28:00.244858Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.305258ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-16T02:28:00.244911Z","caller":"traceutil/trace.go:172","msg":"trace[741134019] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:1159; }","duration":"184.365266ms","start":"2025-12-16T02:28:00.060537Z","end":"2025-12-16T02:28:00.244902Z","steps":["trace[741134019] 'agreement among raft nodes before linearized reading' (duration: 182.394603ms)"],"step_count":1}
{"level":"info","ts":"2025-12-16T02:28:00.245204Z","caller":"traceutil/trace.go:172","msg":"trace[764059712] transaction","detail":"{read_only:false; response_revision:1160; number_of_response:1; }","duration":"188.392234ms","start":"2025-12-16T02:28:00.056804Z","end":"2025-12-16T02:28:00.245196Z","steps":["trace[764059712] 'process raft request' (duration: 186.155367ms)"],"step_count":1}
{"level":"info","ts":"2025-12-16T02:28:31.553762Z","caller":"traceutil/trace.go:172","msg":"trace[1706003156] transaction","detail":"{read_only:false; response_revision:1358; number_of_response:1; }","duration":"147.959421ms","start":"2025-12-16T02:28:31.405783Z","end":"2025-12-16T02:28:31.553743Z","steps":["trace[1706003156] 'process raft request' (duration: 147.161094ms)"],"step_count":1}
{"level":"info","ts":"2025-12-16T02:28:33.791128Z","caller":"traceutil/trace.go:172","msg":"trace[1821992009] transaction","detail":"{read_only:false; response_revision:1361; number_of_response:1; }","duration":"204.991733ms","start":"2025-12-16T02:28:33.586124Z","end":"2025-12-16T02:28:33.791116Z","steps":["trace[1821992009] 'process raft request' (duration: 204.843372ms)"],"step_count":1}
{"level":"info","ts":"2025-12-16T02:28:41.425366Z","caller":"traceutil/trace.go:172","msg":"trace[1597314245] linearizableReadLoop","detail":"{readStateIndex:1475; appliedIndex:1475; }","duration":"252.486496ms","start":"2025-12-16T02:28:41.172823Z","end":"2025-12-16T02:28:41.425310Z","steps":["trace[1597314245] 'read index received' (duration: 252.478072ms)","trace[1597314245] 'applied index is now lower than readState.Index' (duration: 7.257µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-16T02:28:41.426010Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"352.450732ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" limit:1 ","response":"range_response_count:1 size:635"}
{"level":"info","ts":"2025-12-16T02:28:41.426053Z","caller":"traceutil/trace.go:172","msg":"trace[1627952095] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:1; response_revision:1432; }","duration":"352.515496ms","start":"2025-12-16T02:28:41.073530Z","end":"2025-12-16T02:28:41.426045Z","steps":["trace[1627952095] 'range keys from in-memory index tree' (duration: 352.316672ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-16T02:28:41.426086Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T02:28:41.073516Z","time spent":"352.560408ms","remote":"127.0.0.1:48062","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":1,"response size":658,"request content":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" limit:1 "}
{"level":"warn","ts":"2025-12-16T02:28:41.427187Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"254.30788ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-16T02:28:41.428379Z","caller":"traceutil/trace.go:172","msg":"trace[1775890357] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1432; }","duration":"255.566134ms","start":"2025-12-16T02:28:41.172804Z","end":"2025-12-16T02:28:41.428370Z","steps":["trace[1775890357] 'agreement among raft nodes before linearized reading' (duration: 252.699212ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-16T02:28:41.428561Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"186.764337ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotcontents\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-16T02:28:41.428597Z","caller":"traceutil/trace.go:172","msg":"trace[735438485] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotcontents; range_end:; response_count:0; response_revision:1433; }","duration":"186.803017ms","start":"2025-12-16T02:28:41.241786Z","end":"2025-12-16T02:28:41.428589Z","steps":["trace[735438485] 'agreement among raft nodes before linearized reading' (duration: 186.620629ms)"],"step_count":1}
{"level":"info","ts":"2025-12-16T02:28:41.428096Z","caller":"traceutil/trace.go:172","msg":"trace[1823760032] transaction","detail":"{read_only:false; response_revision:1433; number_of_response:1; }","duration":"291.079186ms","start":"2025-12-16T02:28:41.137009Z","end":"2025-12-16T02:28:41.428088Z","steps":["trace[1823760032] 'process raft request' (duration: 288.538653ms)"],"step_count":1}
{"level":"info","ts":"2025-12-16T02:28:41.429181Z","caller":"traceutil/trace.go:172","msg":"trace[1322004444] transaction","detail":"{read_only:false; response_revision:1434; number_of_response:1; }","duration":"108.910408ms","start":"2025-12-16T02:28:41.320263Z","end":"2025-12-16T02:28:41.429174Z","steps":["trace[1322004444] 'process raft request' (duration: 108.700295ms)"],"step_count":1}
==> kernel <==
02:31:13 up 5 min, 0 users, load average: 0.40, 0.78, 0.42
Linux addons-703051 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:48:01 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [fc4ee09f2d08e04fafa64043bf9167661a9fac75aa2828ba1df68e1e9ac9d42d] <==
W1216 02:27:13.061545 1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
E1216 02:28:18.725750 1 conn.go:339] Error on socket receive: read tcp 192.168.39.237:8443->192.168.39.1:39058: use of closed network connection
E1216 02:28:18.909892 1 conn.go:339] Error on socket receive: read tcp 192.168.39.237:8443->192.168.39.1:39086: use of closed network connection
I1216 02:28:28.102185 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.41.31"}
I1216 02:28:45.840837 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1216 02:28:46.006776 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.44.199"}
I1216 02:29:08.994552 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1216 02:29:11.591039 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1216 02:29:39.078476 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1216 02:29:39.078542 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1216 02:29:39.105637 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1216 02:29:39.105736 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1216 02:29:39.116969 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1216 02:29:39.117012 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1216 02:29:39.136265 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1216 02:29:39.136346 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1216 02:29:39.162163 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1216 02:29:39.162398 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
E1216 02:29:40.051521 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
W1216 02:29:40.117313 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
E1216 02:29:40.130969 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
W1216 02:29:40.163097 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1216 02:29:40.183753 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
E1216 02:29:40.278886 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
I1216 02:31:12.203132 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.251.56"}
==> kube-controller-manager [960abe7ee91b99d086a9808387f532e974c02cfe22515b93b199b699e1435675] <==
E1216 02:29:44.864267 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 02:29:47.652846 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 02:29:47.653807 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 02:29:48.250007 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 02:29:48.250990 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 02:29:50.035109 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 02:29:50.036207 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 02:29:55.173398 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 02:29:55.174806 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 02:29:56.110857 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 02:29:56.111861 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 02:30:01.712447 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 02:30:01.713282 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 02:30:10.129761 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 02:30:10.130744 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 02:30:15.996370 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 02:30:15.997425 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 02:30:20.374642 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 02:30:20.376199 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 02:30:45.958258 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 02:30:45.959518 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 02:30:53.413869 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 02:30:53.414882 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1216 02:30:56.228862 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1216 02:30:56.229778 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [1947cc0b3ab5e45d542ffca511b910b70e9b09ab19381d77587d1ffa064d6217] <==
I1216 02:26:45.647536 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1216 02:26:45.748830 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1216 02:26:45.748912 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.237"]
E1216 02:26:45.749024 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1216 02:26:46.051271 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1216 02:26:46.051340 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1216 02:26:46.051373 1 server_linux.go:132] "Using iptables Proxier"
I1216 02:26:46.068175 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1216 02:26:46.081747 1 server.go:527] "Version info" version="v1.34.2"
I1216 02:26:46.081782 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1216 02:26:46.088951 1 config.go:200] "Starting service config controller"
I1216 02:26:46.088981 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1216 02:26:46.088996 1 config.go:106] "Starting endpoint slice config controller"
I1216 02:26:46.089000 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1216 02:26:46.089009 1 config.go:403] "Starting serviceCIDR config controller"
I1216 02:26:46.089012 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1216 02:26:46.094399 1 config.go:309] "Starting node config controller"
I1216 02:26:46.094426 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1216 02:26:46.094433 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1216 02:26:46.189999 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1216 02:26:46.190023 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1216 02:26:46.190069 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [5f075f2bc2541e1114ca94b40540568b39a2ac8c5851d908739adbca47426b40] <==
E1216 02:26:35.093106 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1216 02:26:35.093247 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1216 02:26:35.093352 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1216 02:26:35.093441 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1216 02:26:35.093469 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1216 02:26:35.093506 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1216 02:26:35.093911 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1216 02:26:35.093944 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1216 02:26:35.093777 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1216 02:26:35.900519 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1216 02:26:35.903807 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1216 02:26:35.908156 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1216 02:26:35.908228 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1216 02:26:35.917828 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1216 02:26:35.935984 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1216 02:26:36.109950 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1216 02:26:36.139191 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1216 02:26:36.174121 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1216 02:26:36.243917 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1216 02:26:36.248153 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1216 02:26:36.288186 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1216 02:26:36.304473 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1216 02:26:36.357548 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1216 02:26:36.362967 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
I1216 02:26:38.477728 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
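The burst of "Failed to watch ... is forbidden" errors above is a startup race rather than a real RBAC problem: the scheduler's informers begin listing resources before the apiserver has finished reconciling the system:kube-scheduler role bindings, and the errors stop once "Caches are synced" is logged at 02:26:38. If in doubt, the grants can be replayed after startup with standard kubectl impersonation (same context as the rest of this log):

    $ kubectl --context addons-703051 auth can-i list statefulsets.apps --as=system:kube-scheduler
    $ kubectl --context addons-703051 get clusterrolebinding system:kube-scheduler -o wide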
==> kubelet <==
Dec 16 02:29:42 addons-703051 kubelet[1505]: I1216 02:29:42.315346 1505 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa8d34799b67e535cc6e8c3d14ce00e6d4b88d599323ff84e50f1e089eb02bf4"} err="failed to get container status \"aa8d34799b67e535cc6e8c3d14ce00e6d4b88d599323ff84e50f1e089eb02bf4\": rpc error: code = NotFound desc = could not find container \"aa8d34799b67e535cc6e8c3d14ce00e6d4b88d599323ff84e50f1e089eb02bf4\": container with ID starting with aa8d34799b67e535cc6e8c3d14ce00e6d4b88d599323ff84e50f1e089eb02bf4 not found: ID does not exist"
Dec 16 02:29:42 addons-703051 kubelet[1505]: I1216 02:29:42.777927 1505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="023c04c8-d489-4194-8bf8-2f64df0827e2" path="/var/lib/kubelet/pods/023c04c8-d489-4194-8bf8-2f64df0827e2/volumes"
Dec 16 02:29:42 addons-703051 kubelet[1505]: I1216 02:29:42.778292 1505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a6fa29b-fa31-4375-aeed-182a5dd53b2e" path="/var/lib/kubelet/pods/1a6fa29b-fa31-4375-aeed-182a5dd53b2e/volumes"
Dec 16 02:29:42 addons-703051 kubelet[1505]: I1216 02:29:42.778773 1505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3416c953-7a2b-4c86-b00e-d9bd8a5a3cbd" path="/var/lib/kubelet/pods/3416c953-7a2b-4c86-b00e-d9bd8a5a3cbd/volumes"
Dec 16 02:29:49 addons-703051 kubelet[1505]: E1216 02:29:49.167104 1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765852189165635244 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 02:29:49 addons-703051 kubelet[1505]: E1216 02:29:49.167129 1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765852189165635244 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 02:29:59 addons-703051 kubelet[1505]: E1216 02:29:59.169625 1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765852199169032919 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 02:29:59 addons-703051 kubelet[1505]: E1216 02:29:59.169656 1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765852199169032919 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 02:30:09 addons-703051 kubelet[1505]: E1216 02:30:09.172590 1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765852209172164163 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 02:30:09 addons-703051 kubelet[1505]: E1216 02:30:09.172631 1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765852209172164163 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 02:30:19 addons-703051 kubelet[1505]: E1216 02:30:19.175266 1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765852219174803660 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 02:30:19 addons-703051 kubelet[1505]: E1216 02:30:19.175291 1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765852219174803660 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 02:30:29 addons-703051 kubelet[1505]: E1216 02:30:29.178082 1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765852229177595506 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 02:30:29 addons-703051 kubelet[1505]: E1216 02:30:29.178116 1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765852229177595506 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 02:30:39 addons-703051 kubelet[1505]: E1216 02:30:39.182213 1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765852239181479359 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 02:30:39 addons-703051 kubelet[1505]: E1216 02:30:39.182250 1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765852239181479359 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 02:30:40 addons-703051 kubelet[1505]: I1216 02:30:40.775510 1505 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-4fpsx" secret="" err="secret \"gcp-auth\" not found"
Dec 16 02:30:49 addons-703051 kubelet[1505]: E1216 02:30:49.184891 1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765852249184538291 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 02:30:49 addons-703051 kubelet[1505]: E1216 02:30:49.184915 1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765852249184538291 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 02:30:54 addons-703051 kubelet[1505]: I1216 02:30:54.773932 1505 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Dec 16 02:30:59 addons-703051 kubelet[1505]: E1216 02:30:59.187783 1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765852259187199865 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 02:30:59 addons-703051 kubelet[1505]: E1216 02:30:59.187820 1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765852259187199865 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 02:31:09 addons-703051 kubelet[1505]: E1216 02:31:09.191641 1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765852269190957268 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 02:31:09 addons-703051 kubelet[1505]: E1216 02:31:09.191899 1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765852269190957268 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 16 02:31:12 addons-703051 kubelet[1505]: I1216 02:31:12.227093 1505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvm7w\" (UniqueName: \"kubernetes.io/projected/2fc9da29-e194-4963-9517-d1288ba2b8a8-kube-api-access-nvm7w\") pod \"hello-world-app-5d498dc89-8b4zv\" (UID: \"2fc9da29-e194-4963-9517-d1288ba2b8a8\") " pod="default/hello-world-app-5d498dc89-8b4zv"
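Two recurring kubelet warnings above are noise rather than failure causes: the eviction manager's "missing image stats" errors look like a stats-API mismatch between this kubelet and CRI-O (the embedded image_filesystems message shows stats are in fact being returned, so this is not real disk pressure), and the "gcp-auth not found" pull-secret warnings simply reflect that the gcp-auth addon is not enabled. The raw CRI-side view can be checked directly; this assumes crictl is present in the guest, as it normally is in minikube:

    $ out/minikube-linux-amd64 -p addons-703051 ssh "sudo crictl imagefsinfo"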
==> storage-provisioner [c68acdb398858e7f163118bca702ed13a99eefaee99e6d8be83f3af1ec90af7b] <==
W1216 02:30:48.793908 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:30:50.798878 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:30:50.805341 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:30:52.808874 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:30:52.813357 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:30:54.816428 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:30:54.823498 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:30:56.826745 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:30:56.831605 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:30:58.835449 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:30:58.845545 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:31:00.849389 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:31:00.856037 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:31:02.859070 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:31:02.866177 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:31:04.870051 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:31:04.874585 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:31:06.877388 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:31:06.884105 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:31:08.887384 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:31:08.892257 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:31:10.895849 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:31:10.902594 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:31:12.907420 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1216 02:31:12.913367 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
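The storage-provisioner warnings above repeat every ~2 seconds, which suggests (an inference from the cadence, not from the code) that the provisioner still takes its leader-election lock on a core/v1 Endpoints object, so the apiserver attaches a deprecation warning to every renewal. Harmless here. The replacement objects the warning points at can be listed with:

    $ kubectl --context addons-703051 get endpointslices.discovery.k8s.io -A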
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-703051 -n addons-703051
helpers_test.go:270: (dbg) Run: kubectl --context addons-703051 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-8b4zv ingress-nginx-admission-create-vvbgk ingress-nginx-admission-patch-srpnh
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run: kubectl --context addons-703051 describe pod hello-world-app-5d498dc89-8b4zv ingress-nginx-admission-create-vvbgk ingress-nginx-admission-patch-srpnh
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-703051 describe pod hello-world-app-5d498dc89-8b4zv ingress-nginx-admission-create-vvbgk ingress-nginx-admission-patch-srpnh: exit status 1 (70.949435ms)
-- stdout --
Name:             hello-world-app-5d498dc89-8b4zv
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-703051/192.168.39.237
Start Time:       Tue, 16 Dec 2025 02:31:12 +0000
Labels:           app=hello-world-app
                  pod-template-hash=5d498dc89
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/hello-world-app-5d498dc89
Containers:
  hello-world-app:
    Container ID:
    Image:          docker.io/kicbase/echo-server:1.0
    Image ID:
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nvm7w (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-nvm7w:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-8b4zv to addons-703051
  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-vvbgk" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-srpnh" not found
** /stderr **
helpers_test.go:288: kubectl --context addons-703051 describe pod hello-world-app-5d498dc89-8b4zv ingress-nginx-admission-create-vvbgk ingress-nginx-admission-patch-srpnh: exit status 1
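The non-zero exit from describe is itself expected: the two ingress-nginx admission pods are one-shot job pods that had already been garbage-collected (hence NotFound), and hello-world-app-5d498dc89-8b4zv was 2 seconds old and still pulling its image when the snapshot was taken. When reproducing, it is more robust to wait on the owning workload than to describe a transient pod; this assumes the ReplicaSet above is owned by a Deployment named hello-world-app, as the pod-template hash suggests:

    $ kubectl --context addons-703051 wait --for=condition=available deployment/hello-world-app --timeout=120s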
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-703051 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-703051 addons disable ingress-dns --alsologtostderr -v=1: (1.727309562s)
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-703051 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-703051 addons disable ingress --alsologtostderr -v=1: (7.69950441s)
--- FAIL: TestAddons/parallel/Ingress (158.06s)
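When re-running this test, the most useful evidence lives in the ingress-nginx namespace, which the two addon-disable steps above tear down; capture it before that point. A hypothetical triage sequence (profile and namespace taken from this log; the deployment name assumes the addon keeps the upstream ingress-nginx-controller name):

    $ kubectl --context addons-703051 -n ingress-nginx get pods,svc -o wide
    $ kubectl --context addons-703051 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=100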