=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run: kubectl --context addons-410268 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run: kubectl --context addons-410268 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run: kubectl --context addons-410268 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [d8c813f3-2dd2-444d-88d8-fe297f907413] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [d8c813f3-2dd2-444d-88d8-fe297f907413] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004654079s
I1217 11:18:06.404834 1349907 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run: out/minikube-linux-amd64 -p addons-410268 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-410268 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.338476471s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
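Exit status 28 is propagated from the remote command, and 28 is curl's operation-timed-out exit code, so the ingress controller most likely never answered on port 80 within the test window. A minimal manual triage along the same lines, assuming the addons-410268 profile is still up (the deployment name ingress-nginx-controller is the conventional one for the minikube ingress addon and is not confirmed by this log):
    # Repeat the probe with an explicit timeout so curl fails fast instead of hanging:
    out/minikube-linux-amd64 -p addons-410268 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Check controller health and recent logs (deployment name is an assumption):
    kubectl --context addons-410268 -n ingress-nginx get pods -o wide
    kubectl --context addons-410268 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50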
addons_test.go:290: (dbg) Run: kubectl --context addons-410268 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run: out/minikube-linux-amd64 -p addons-410268 ip
addons_test.go:301: (dbg) Run: nslookup hello-john.test 192.168.39.28
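The ingress-dns addon runs an in-cluster DNS server that answers for hosts declared in Ingress resources; this step queries it directly at the VM IP reported by the ip command above. An equivalent manual check, using the host name and server IP from this run:
    # A timeout or NXDOMAIN here would implicate the ingress-dns pod rather than the nginx backend:
    nslookup hello-john.test 192.168.39.28
    dig +short hello-john.test @192.168.39.28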
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-410268 -n addons-410268
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-amd64 -p addons-410268 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-410268 logs -n 25: (1.099077831s)
helpers_test.go:261: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-783543 │ download-only-783543 │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ 17 Dec 25 11:15 UTC │
│ start │ --download-only -p binary-mirror-200790 --alsologtostderr --binary-mirror http://127.0.0.1:44955 --driver=kvm2 --container-runtime=crio │ binary-mirror-200790 │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ │
│ delete │ -p binary-mirror-200790 │ binary-mirror-200790 │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ 17 Dec 25 11:15 UTC │
│ addons │ disable dashboard -p addons-410268 │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ │
│ addons │ enable dashboard -p addons-410268 │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ │
│ start │ -p addons-410268 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ 17 Dec 25 11:17 UTC │
│ addons │ addons-410268 addons disable volcano --alsologtostderr -v=1 │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:17 UTC │
│ addons │ addons-410268 addons disable gcp-auth --alsologtostderr -v=1 │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:17 UTC │
│ addons │ enable headlamp -p addons-410268 --alsologtostderr -v=1 │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:17 UTC │
│ addons │ addons-410268 addons disable metrics-server --alsologtostderr -v=1 │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:17 UTC │
│ addons │ addons-410268 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:17 UTC │
│ addons │ addons-410268 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:18 UTC │
│ addons │ addons-410268 addons disable headlamp --alsologtostderr -v=1 │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:18 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-410268 │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:18 UTC │
│ addons │ addons-410268 addons disable registry-creds --alsologtostderr -v=1 │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:18 UTC │
│ ip │ addons-410268 ip │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:18 UTC │
│ addons │ addons-410268 addons disable registry --alsologtostderr -v=1 │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:18 UTC │
│ ssh │ addons-410268 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ │
│ addons │ addons-410268 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:18 UTC │
│ addons │ addons-410268 addons disable yakd --alsologtostderr -v=1 │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:18 UTC │
│ ssh │ addons-410268 ssh cat /opt/local-path-provisioner/pvc-b4fbc5e0-3297-44da-8635-bcba4bc247bc_default_test-pvc/file1 │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:18 UTC │
│ addons │ addons-410268 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:19 UTC │
│ addons │ addons-410268 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:18 UTC │
│ addons │ addons-410268 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:18 UTC │
│ ip │ addons-410268 ip │ addons-410268 │ jenkins │ v1.37.0 │ 17 Dec 25 11:20 UTC │ 17 Dec 25 11:20 UTC │
└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/17 11:15:20
Running on machine: ubuntu-20-agent-10
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1217 11:15:20.592142 1350845 out.go:360] Setting OutFile to fd 1 ...
I1217 11:15:20.592433 1350845 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:15:20.592444 1350845 out.go:374] Setting ErrFile to fd 2...
I1217 11:15:20.592449 1350845 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:15:20.592624 1350845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
I1217 11:15:20.593159 1350845 out.go:368] Setting JSON to false
I1217 11:15:20.594100 1350845 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":17860,"bootTime":1765952261,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1217 11:15:20.594163 1350845 start.go:143] virtualization: kvm guest
I1217 11:15:20.596159 1350845 out.go:179] * [addons-410268] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1217 11:15:20.597382 1350845 out.go:179] - MINIKUBE_LOCATION=21808
I1217 11:15:20.597414 1350845 notify.go:221] Checking for updates...
I1217 11:15:20.599656 1350845 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1217 11:15:20.600910 1350845 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
I1217 11:15:20.602114 1350845 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
I1217 11:15:20.603268 1350845 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1217 11:15:20.604451 1350845 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1217 11:15:20.605844 1350845 driver.go:422] Setting default libvirt URI to qemu:///system
I1217 11:15:20.638414 1350845 out.go:179] * Using the kvm2 driver based on user configuration
I1217 11:15:20.639570 1350845 start.go:309] selected driver: kvm2
I1217 11:15:20.639588 1350845 start.go:927] validating driver "kvm2" against <nil>
I1217 11:15:20.639600 1350845 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1217 11:15:20.640355 1350845 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1217 11:15:20.640604 1350845 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1217 11:15:20.640636 1350845 cni.go:84] Creating CNI manager for ""
I1217 11:15:20.640679 1350845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1217 11:15:20.640689 1350845 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1217 11:15:20.640737 1350845 start.go:353] cluster config:
{Name:addons-410268 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-410268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1217 11:15:20.640872 1350845 iso.go:125] acquiring lock: {Name:mkf3f94e126ae38d32753ef0086ea24e79e9b483 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1217 11:15:20.642225 1350845 out.go:179] * Starting "addons-410268" primary control-plane node in "addons-410268" cluster
I1217 11:15:20.643270 1350845 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
I1217 11:15:20.643299 1350845 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
I1217 11:15:20.643310 1350845 cache.go:65] Caching tarball of preloaded images
I1217 11:15:20.643417 1350845 preload.go:238] Found /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1217 11:15:20.643428 1350845 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
I1217 11:15:20.643719 1350845 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/config.json ...
I1217 11:15:20.643744 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/config.json: {Name:mk3d1e0e95208bc322d19bb9e866aad356f15d5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 11:15:20.643886 1350845 start.go:360] acquireMachinesLock for addons-410268: {Name:mk7c4b33009a84629d0b15fa1b2a158ad55cf3fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1217 11:15:20.643942 1350845 start.go:364] duration metric: took 40.602µs to acquireMachinesLock for "addons-410268"
I1217 11:15:20.643963 1350845 start.go:93] Provisioning new machine with config: &{Name:addons-410268 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-410268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
I1217 11:15:20.644059 1350845 start.go:125] createHost starting for "" (driver="kvm2")
I1217 11:15:20.645460 1350845 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1217 11:15:20.645637 1350845 start.go:159] libmachine.API.Create for "addons-410268" (driver="kvm2")
I1217 11:15:20.645668 1350845 client.go:173] LocalClient.Create starting
I1217 11:15:20.645763 1350845 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem
I1217 11:15:20.743070 1350845 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem
I1217 11:15:20.891945 1350845 main.go:143] libmachine: creating domain...
I1217 11:15:20.891971 1350845 main.go:143] libmachine: creating network...
I1217 11:15:20.893505 1350845 main.go:143] libmachine: found existing default network
I1217 11:15:20.893712 1350845 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1217 11:15:20.894311 1350845 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ce6b00}
I1217 11:15:20.894432 1350845 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-410268</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1217 11:15:20.901303 1350845 main.go:143] libmachine: creating private network mk-addons-410268 192.168.39.0/24...
I1217 11:15:20.981162 1350845 main.go:143] libmachine: private network mk-addons-410268 192.168.39.0/24 created
I1217 11:15:20.981449 1350845 main.go:143] libmachine: <network>
<name>mk-addons-410268</name>
<uuid>c43dccdc-462d-4763-a28d-df275fc6897f</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:2d:d3:cb'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1217 11:15:20.981476 1350845 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268 ...
I1217 11:15:20.981501 1350845 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21808-1345916/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso
I1217 11:15:20.981512 1350845 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21808-1345916/.minikube
I1217 11:15:20.981587 1350845 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21808-1345916/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso...
I1217 11:15:21.304539 1350845 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa...
I1217 11:15:21.406148 1350845 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/addons-410268.rawdisk...
I1217 11:15:21.406198 1350845 main.go:143] libmachine: Writing magic tar header
I1217 11:15:21.406220 1350845 main.go:143] libmachine: Writing SSH key tar header
I1217 11:15:21.406301 1350845 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268 ...
I1217 11:15:21.406375 1350845 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268
I1217 11:15:21.406428 1350845 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268 (perms=drwx------)
I1217 11:15:21.406451 1350845 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines
I1217 11:15:21.406463 1350845 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21808-1345916/.minikube/machines (perms=drwxr-xr-x)
I1217 11:15:21.406476 1350845 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21808-1345916/.minikube
I1217 11:15:21.406487 1350845 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21808-1345916/.minikube (perms=drwxr-xr-x)
I1217 11:15:21.406498 1350845 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21808-1345916
I1217 11:15:21.406508 1350845 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21808-1345916 (perms=drwxrwxr-x)
I1217 11:15:21.406517 1350845 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1217 11:15:21.406544 1350845 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1217 11:15:21.406556 1350845 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1217 11:15:21.406564 1350845 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1217 11:15:21.406575 1350845 main.go:143] libmachine: checking permissions on dir: /home
I1217 11:15:21.406585 1350845 main.go:143] libmachine: skipping /home - not owner
I1217 11:15:21.406589 1350845 main.go:143] libmachine: defining domain...
I1217 11:15:21.407959 1350845 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-410268</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/addons-410268.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-410268'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
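The network and domain definitions above are ordinary libvirt XML, so, assuming the qemu:///system URI from this log is still reachable, they can be dumped back out for comparison with libvirt's own CLI:
    # Dump the live definitions as libvirt sees them (names taken from this run):
    virsh --connect qemu:///system net-dumpxml mk-addons-410268
    virsh --connect qemu:///system dumpxml addons-410268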
I1217 11:15:21.414288 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:b5:5c:a9 in network default
I1217 11:15:21.414861 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:21.414878 1350845 main.go:143] libmachine: starting domain...
I1217 11:15:21.414883 1350845 main.go:143] libmachine: ensuring networks are active...
I1217 11:15:21.415719 1350845 main.go:143] libmachine: Ensuring network default is active
I1217 11:15:21.416183 1350845 main.go:143] libmachine: Ensuring network mk-addons-410268 is active
I1217 11:15:21.416757 1350845 main.go:143] libmachine: getting domain XML...
I1217 11:15:21.417848 1350845 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-410268</name>
<uuid>7773aa72-69d0-4e14-8c7e-331a57e11558</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/addons-410268.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:35:b5:14'/>
<source network='mk-addons-410268'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:b5:5c:a9'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1217 11:15:22.749802 1350845 main.go:143] libmachine: waiting for domain to start...
I1217 11:15:22.751357 1350845 main.go:143] libmachine: domain is now running
I1217 11:15:22.751378 1350845 main.go:143] libmachine: waiting for IP...
I1217 11:15:22.752321 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:22.752962 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
I1217 11:15:22.752977 1350845 main.go:143] libmachine: trying to list again with source=arp
I1217 11:15:22.753289 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
I1217 11:15:22.753337 1350845 retry.go:31] will retry after 261.058009ms: waiting for domain to come up
I1217 11:15:23.016027 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:23.016778 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
I1217 11:15:23.016795 1350845 main.go:143] libmachine: trying to list again with source=arp
I1217 11:15:23.017150 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
I1217 11:15:23.017195 1350845 retry.go:31] will retry after 249.311618ms: waiting for domain to come up
I1217 11:15:23.268053 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:23.268786 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
I1217 11:15:23.268819 1350845 main.go:143] libmachine: trying to list again with source=arp
I1217 11:15:23.269192 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
I1217 11:15:23.269245 1350845 retry.go:31] will retry after 438.21381ms: waiting for domain to come up
I1217 11:15:23.709022 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:23.709527 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
I1217 11:15:23.709546 1350845 main.go:143] libmachine: trying to list again with source=arp
I1217 11:15:23.709933 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
I1217 11:15:23.709970 1350845 retry.go:31] will retry after 605.827989ms: waiting for domain to come up
I1217 11:15:24.317961 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:24.318552 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
I1217 11:15:24.318574 1350845 main.go:143] libmachine: trying to list again with source=arp
I1217 11:15:24.318975 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
I1217 11:15:24.319028 1350845 retry.go:31] will retry after 647.608813ms: waiting for domain to come up
I1217 11:15:24.967974 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:24.968640 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
I1217 11:15:24.968680 1350845 main.go:143] libmachine: trying to list again with source=arp
I1217 11:15:24.969073 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
I1217 11:15:24.969123 1350845 retry.go:31] will retry after 765.154906ms: waiting for domain to come up
I1217 11:15:25.735950 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:25.736567 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
I1217 11:15:25.736581 1350845 main.go:143] libmachine: trying to list again with source=arp
I1217 11:15:25.736902 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
I1217 11:15:25.736944 1350845 retry.go:31] will retry after 858.001615ms: waiting for domain to come up
I1217 11:15:26.597164 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:26.597750 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
I1217 11:15:26.597767 1350845 main.go:143] libmachine: trying to list again with source=arp
I1217 11:15:26.598173 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
I1217 11:15:26.598218 1350845 retry.go:31] will retry after 1.003617568s: waiting for domain to come up
I1217 11:15:27.603763 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:27.604426 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
I1217 11:15:27.604454 1350845 main.go:143] libmachine: trying to list again with source=arp
I1217 11:15:27.604903 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
I1217 11:15:27.604951 1350845 retry.go:31] will retry after 1.483692995s: waiting for domain to come up
I1217 11:15:29.090763 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:29.091460 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
I1217 11:15:29.091475 1350845 main.go:143] libmachine: trying to list again with source=arp
I1217 11:15:29.091852 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
I1217 11:15:29.091945 1350845 retry.go:31] will retry after 2.269901769s: waiting for domain to come up
I1217 11:15:31.363369 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:31.364044 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
I1217 11:15:31.364076 1350845 main.go:143] libmachine: trying to list again with source=arp
I1217 11:15:31.364462 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
I1217 11:15:31.364511 1350845 retry.go:31] will retry after 2.857776026s: waiting for domain to come up
I1217 11:15:34.225497 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:34.226028 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
I1217 11:15:34.226044 1350845 main.go:143] libmachine: trying to list again with source=arp
I1217 11:15:34.226371 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
I1217 11:15:34.226407 1350845 retry.go:31] will retry after 2.523641006s: waiting for domain to come up
I1217 11:15:36.752165 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:36.752781 1350845 main.go:143] libmachine: domain addons-410268 has current primary IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:36.752799 1350845 main.go:143] libmachine: found domain IP: 192.168.39.28
I1217 11:15:36.752808 1350845 main.go:143] libmachine: reserving static IP address...
I1217 11:15:36.753293 1350845 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-410268", mac: "52:54:00:35:b5:14", ip: "192.168.39.28"} in network mk-addons-410268
I1217 11:15:36.966773 1350845 main.go:143] libmachine: reserved static IP address 192.168.39.28 for domain addons-410268
I1217 11:15:36.966811 1350845 main.go:143] libmachine: waiting for SSH...
I1217 11:15:36.966817 1350845 main.go:143] libmachine: Getting to WaitForSSH function...
I1217 11:15:36.969915 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:36.970400 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:minikube Clientid:01:52:54:00:35:b5:14}
I1217 11:15:36.970429 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:36.970624 1350845 main.go:143] libmachine: Using SSH client type: native
I1217 11:15:36.970842 1350845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.28 22 <nil> <nil>}
I1217 11:15:36.970855 1350845 main.go:143] libmachine: About to run SSH command:
exit 0
I1217 11:15:37.083230 1350845 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1217 11:15:37.083690 1350845 main.go:143] libmachine: domain creation complete
I1217 11:15:37.085486 1350845 machine.go:94] provisionDockerMachine start ...
I1217 11:15:37.087875 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:37.088405 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:15:37.088442 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:37.088645 1350845 main.go:143] libmachine: Using SSH client type: native
I1217 11:15:37.088901 1350845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.28 22 <nil> <nil>}
I1217 11:15:37.088919 1350845 main.go:143] libmachine: About to run SSH command:
hostname
I1217 11:15:37.199594 1350845 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1217 11:15:37.199630 1350845 buildroot.go:166] provisioning hostname "addons-410268"
I1217 11:15:37.202546 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:37.202898 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:15:37.202934 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:37.203153 1350845 main.go:143] libmachine: Using SSH client type: native
I1217 11:15:37.203362 1350845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.28 22 <nil> <nil>}
I1217 11:15:37.203373 1350845 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-410268 && echo "addons-410268" | sudo tee /etc/hostname
I1217 11:15:37.330802 1350845 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-410268
I1217 11:15:37.333935 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:37.334380 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:15:37.334414 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:37.334576 1350845 main.go:143] libmachine: Using SSH client type: native
I1217 11:15:37.334827 1350845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.28 22 <nil> <nil>}
I1217 11:15:37.334847 1350845 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-410268' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-410268/g' /etc/hosts;
else
echo '127.0.1.1 addons-410268' | sudo tee -a /etc/hosts;
fi
fi
I1217 11:15:37.457116 1350845 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1217 11:15:37.457148 1350845 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1345916/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1345916/.minikube}
I1217 11:15:37.457175 1350845 buildroot.go:174] setting up certificates
I1217 11:15:37.457199 1350845 provision.go:84] configureAuth start
I1217 11:15:37.460061 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:37.460552 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:15:37.460593 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:37.462938 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:37.463326 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:15:37.463353 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:37.463519 1350845 provision.go:143] copyHostCerts
I1217 11:15:37.463618 1350845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem (1082 bytes)
I1217 11:15:37.463893 1350845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem (1123 bytes)
I1217 11:15:37.464097 1350845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem (1675 bytes)
I1217 11:15:37.464206 1350845 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem org=jenkins.addons-410268 san=[127.0.0.1 192.168.39.28 addons-410268 localhost minikube]
I1217 11:15:37.539990 1350845 provision.go:177] copyRemoteCerts
I1217 11:15:37.540073 1350845 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1217 11:15:37.542671 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:37.543086 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:15:37.543114 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:37.543273 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:15:37.629316 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1217 11:15:37.655732 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1217 11:15:37.681614 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1217 11:15:37.708265 1350845 provision.go:87] duration metric: took 251.048973ms to configureAuth
I1217 11:15:37.708292 1350845 buildroot.go:189] setting minikube options for container-runtime
I1217 11:15:37.708493 1350845 config.go:182] Loaded profile config "addons-410268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:15:37.711190 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:37.711570 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:15:37.711604 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:37.711764 1350845 main.go:143] libmachine: Using SSH client type: native
I1217 11:15:37.711995 1350845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.28 22 <nil> <nil>}
I1217 11:15:37.712015 1350845 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1217 11:15:37.948745 1350845 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1217 11:15:37.948780 1350845 machine.go:97] duration metric: took 863.272475ms to provisionDockerMachine
I1217 11:15:37.948795 1350845 client.go:176] duration metric: took 17.303118011s to LocalClient.Create
I1217 11:15:37.948819 1350845 start.go:167] duration metric: took 17.303180981s to libmachine.API.Create "addons-410268"
I1217 11:15:37.948827 1350845 start.go:293] postStartSetup for "addons-410268" (driver="kvm2")
I1217 11:15:37.948849 1350845 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1217 11:15:37.948938 1350845 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1217 11:15:37.952476 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:37.953079 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:15:37.953109 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:37.953318 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:15:38.040194 1350845 ssh_runner.go:195] Run: cat /etc/os-release
I1217 11:15:38.045641 1350845 info.go:137] Remote host: Buildroot 2025.02
I1217 11:15:38.045671 1350845 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1345916/.minikube/addons for local assets ...
I1217 11:15:38.045768 1350845 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1345916/.minikube/files for local assets ...
I1217 11:15:38.045805 1350845 start.go:296] duration metric: took 96.971166ms for postStartSetup
I1217 11:15:38.048732 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:38.049146 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:15:38.049179 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:38.049388 1350845 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/config.json ...
I1217 11:15:38.049575 1350845 start.go:128] duration metric: took 17.405503986s to createHost
I1217 11:15:38.052486 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:38.053692 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:15:38.053720 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:38.053905 1350845 main.go:143] libmachine: Using SSH client type: native
I1217 11:15:38.054105 1350845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.28 22 <nil> <nil>}
I1217 11:15:38.054127 1350845 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1217 11:15:38.170852 1350845 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765970138.128366888
I1217 11:15:38.170878 1350845 fix.go:216] guest clock: 1765970138.128366888
I1217 11:15:38.170888 1350845 fix.go:229] Guest: 2025-12-17 11:15:38.128366888 +0000 UTC Remote: 2025-12-17 11:15:38.049587758 +0000 UTC m=+17.508960444 (delta=78.77913ms)
I1217 11:15:38.170911 1350845 fix.go:200] guest clock delta is within tolerance: 78.77913ms
I1217 11:15:38.170918 1350845 start.go:83] releasing machines lock for "addons-410268", held for 17.526964872s
I1217 11:15:38.173738 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:38.174192 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:15:38.174227 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:38.174468 1350845 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem (1675 bytes)
I1217 11:15:38.174512 1350845 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem (1082 bytes)
I1217 11:15:38.174539 1350845 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem (1123 bytes)
I1217 11:15:38.174563 1350845 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem (1675 bytes)
W1217 11:15:38.174630 1350845 start.go:789] pre-probe CA setup failed: create ca cert file asset for /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt: stat: stat /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt: no such file or directory
I1217 11:15:38.175059 1350845 ssh_runner.go:195] Run: cat /version.json
I1217 11:15:38.175136 1350845 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1217 11:15:38.178170 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:38.178339 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:38.178602 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:15:38.178626 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:38.178741 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:15:38.178765 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:38.178791 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:15:38.179002 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:15:38.262085 1350845 ssh_runner.go:195] Run: systemctl --version
I1217 11:15:38.295594 1350845 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1217 11:15:38.450340 1350845 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1217 11:15:38.457211 1350845 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1217 11:15:38.457297 1350845 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1217 11:15:38.476258 1350845 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
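The `find ... -exec mv` above is how minikube sidelines conflicting CNI configs before installing its own bridge config: matching files get a `.mk_disabled` suffix so the change is reversible. A minimal Go sketch of the same pattern (hypothetical code, not minikube's implementation):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Rename bridge/podman CNI configs so the runtime cannot load them,
// keeping the .mk_disabled suffix so they can be restored later.
func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			panic(err)
		}
		for _, path := range matches {
			if filepath.Ext(path) == ".mk_disabled" {
				continue // already disabled
			}
			if err := os.Rename(path, path+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Printf("disabled %s\n", path)
		}
	}
}
```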
I1217 11:15:38.476288 1350845 start.go:496] detecting cgroup driver to use...
I1217 11:15:38.476363 1350845 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1217 11:15:38.495138 1350845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1217 11:15:38.510957 1350845 docker.go:218] disabling cri-docker service (if available) ...
I1217 11:15:38.511041 1350845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1217 11:15:38.528272 1350845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1217 11:15:38.543641 1350845 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1217 11:15:38.687153 1350845 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1217 11:15:38.894865 1350845 docker.go:234] disabling docker service ...
I1217 11:15:38.894938 1350845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1217 11:15:38.910577 1350845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1217 11:15:38.924533 1350845 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1217 11:15:39.079546 1350845 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1217 11:15:39.216531 1350845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1217 11:15:39.232295 1350845 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1217 11:15:39.253427 1350845 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1217 11:15:39.253562 1350845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1217 11:15:39.264897 1350845 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1217 11:15:39.264989 1350845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1217 11:15:39.276277 1350845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1217 11:15:39.287433 1350845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1217 11:15:39.298410 1350845 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1217 11:15:39.310245 1350845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1217 11:15:39.321297 1350845 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1217 11:15:39.340448 1350845 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
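The sed pipeline above pins the pause image and the cgroup driver in cri-o's drop-in config. The same rewrite-any-matching-line logic in Go (a sketch; `setConfLine` is a hypothetical helper, and it needs root to write the file):

```go
package main

import (
	"os"
	"regexp"
)

// Replace every line matching `<key> = ...` in the config file with the
// given replacement line, mirroring `sed -i 's|^.*<key> = .*$|...|'`.
func setConfLine(path, keyPattern, replacement string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile("(?m)^.*" + keyPattern + " = .*$")
	return os.WriteFile(path, re.ReplaceAll(data, []byte(replacement)), 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setConfLine(conf, "pause_image", `pause_image = "registry.k8s.io/pause:3.10.1"`); err != nil {
		panic(err)
	}
	if err := setConfLine(conf, "cgroup_manager", `cgroup_manager = "cgroupfs"`); err != nil {
		panic(err)
	}
}
```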
I1217 11:15:39.351627 1350845 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1217 11:15:39.360944 1350845 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1217 11:15:39.361048 1350845 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1217 11:15:39.380732 1350845 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
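The two commands above cover the kernel prerequisites for pod networking: loading `br_netfilter` makes `/proc/sys/net/bridge/bridge-nf-call-iptables` appear (which is why the earlier sysctl probe failed), and IPv4 forwarding lets the node route pod traffic. A rough Go equivalent (assumes root, like the `sudo` calls in the log):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// If the bridge netfilter sysctl is missing, load br_netfilter,
// then enable IPv4 forwarding via procfs.
func main() {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			panic(fmt.Sprintf("modprobe br_netfilter: %v: %s", err, out))
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		panic(err)
	}
}
```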
I1217 11:15:39.393178 1350845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 11:15:39.528134 1350845 ssh_runner.go:195] Run: sudo systemctl restart crio
I1217 11:15:39.641833 1350845 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1217 11:15:39.641931 1350845 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1217 11:15:39.647555 1350845 start.go:564] Will wait 60s for crictl version
I1217 11:15:39.647614 1350845 ssh_runner.go:195] Run: which crictl
I1217 11:15:39.651367 1350845 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1217 11:15:39.682101 1350845 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1217 11:15:39.682201 1350845 ssh_runner.go:195] Run: crio --version
I1217 11:15:39.708561 1350845 ssh_runner.go:195] Run: crio --version
I1217 11:15:39.735661 1350845 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
I1217 11:15:39.739497 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:39.739913 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:15:39.739937 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:15:39.740138 1350845 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1217 11:15:39.744417 1350845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1217 11:15:39.758566 1350845 kubeadm.go:884] updating cluster {Name:addons-410268 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-410268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1217 11:15:39.758708 1350845 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
I1217 11:15:39.758788 1350845 ssh_runner.go:195] Run: sudo crictl images --output json
I1217 11:15:39.786378 1350845 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
I1217 11:15:39.786447 1350845 ssh_runner.go:195] Run: which lz4
I1217 11:15:39.790528 1350845 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1217 11:15:39.794847 1350845 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1217 11:15:39.794883 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340314847 bytes)
I1217 11:15:40.941493 1350845 crio.go:462] duration metric: took 1.150980208s to copy over tarball
I1217 11:15:40.941601 1350845 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1217 11:15:42.406837 1350845 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.465192902s)
I1217 11:15:42.406885 1350845 crio.go:469] duration metric: took 1.465353127s to extract the tarball
I1217 11:15:42.406898 1350845 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1217 11:15:42.442447 1350845 ssh_runner.go:195] Run: sudo crictl images --output json
I1217 11:15:42.480775 1350845 crio.go:514] all images are preloaded for cri-o runtime.
I1217 11:15:42.480799 1350845 cache_images.go:86] Images are preloaded, skipping loading
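The preload decision above hinges on whether `crictl images` already reports the control-plane images. A sketch of that check (hypothetical code, parsing crictl's JSON output for the kube-apiserver tag):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// List images via crictl and look for the kube-apiserver tag to decide
// whether the preloaded image tarball still needs to be copied in.
func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var resp struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		panic(err)
	}
	for _, img := range resp.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, "kube-apiserver:v1.34.3") {
				fmt.Println("images already preloaded")
				return
			}
		}
	}
	fmt.Println("preload required")
}
```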
I1217 11:15:42.480807 1350845 kubeadm.go:935] updating node { 192.168.39.28 8443 v1.34.3 crio true true} ...
I1217 11:15:42.480897 1350845 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-410268 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.28
[Install]
config:
{KubernetesVersion:v1.34.3 ClusterName:addons-410268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1217 11:15:42.481002 1350845 ssh_runner.go:195] Run: crio config
I1217 11:15:42.525105 1350845 cni.go:84] Creating CNI manager for ""
I1217 11:15:42.525134 1350845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1217 11:15:42.525156 1350845 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1217 11:15:42.525186 1350845 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.28 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-410268 NodeName:addons-410268 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1217 11:15:42.525314 1350845 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.28
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "addons-410268"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.39.28"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.28"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
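One property worth checking in the config above: `podSubnet` (10.244.0.0/16) and `serviceSubnet` (10.96.0.0/12) must be disjoint, or kube-proxy cannot tell pod traffic from ClusterIP traffic. Since two CIDRs overlap exactly when one contains the other's network address, a quick Go check looks like this:

```go
package main

import (
	"fmt"
	"net"
)

// Verify the pod and service CIDRs from the kubeadm config are disjoint.
func main() {
	_, pods, _ := net.ParseCIDR("10.244.0.0/16")
	_, svcs, _ := net.ParseCIDR("10.96.0.0/12")
	if pods.Contains(svcs.IP) || svcs.Contains(pods.IP) {
		fmt.Println("overlap: choose disjoint CIDRs")
		return
	}
	fmt.Println("pod and service CIDRs are disjoint")
}
```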
I1217 11:15:42.525415 1350845 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
I1217 11:15:42.537907 1350845 binaries.go:51] Found k8s binaries, skipping transfer
I1217 11:15:42.538001 1350845 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1217 11:15:42.549456 1350845 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I1217 11:15:42.569830 1350845 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1217 11:15:42.589621 1350845 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
I1217 11:15:42.610114 1350845 ssh_runner.go:195] Run: grep 192.168.39.28 control-plane.minikube.internal$ /etc/hosts
I1217 11:15:42.614161 1350845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.28 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
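The bash one-liner above (used for both `host.minikube.internal` and `control-plane.minikube.internal`) is an idempotent upsert: drop any stale line for the host name, append the fresh mapping, then copy a temp file over `/etc/hosts`. The same logic in Go (a sketch; `upsertHost` is a hypothetical helper, and the path is a parameter so it can be tried on a copy):

```go
package main

import (
	"os"
	"strings"
)

// Remove any line ending in "\t<name>", append "<ip>\t<name>", and
// replace the hosts file via a temp file + rename.
func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.39.28", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
```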
I1217 11:15:42.631453 1350845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 11:15:42.777216 1350845 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1217 11:15:42.807755 1350845 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268 for IP: 192.168.39.28
I1217 11:15:42.807787 1350845 certs.go:195] generating shared ca certs ...
I1217 11:15:42.807812 1350845 certs.go:227] acquiring lock for ca certs: {Name:mk7dff4294abcbe4af041891799d61c459798c97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 11:15:42.808016 1350845 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key
I1217 11:15:42.917178 1350845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt ...
I1217 11:15:42.917218 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt: {Name:mk924e10cdeab37a6839cfe0bd545c6ef1af1151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 11:15:42.917406 1350845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key ...
I1217 11:15:42.917419 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key: {Name:mk71344f89d4a5b6338f9f1dcf1de80ad0eb74b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 11:15:42.917493 1350845 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key
I1217 11:15:42.943644 1350845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.crt ...
I1217 11:15:42.943676 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.crt: {Name:mkac74a4090a4cfd9810679a72eb27b16dcbc70f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 11:15:42.943845 1350845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key ...
I1217 11:15:42.943857 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key: {Name:mk8c3ff93ea81b44b2dfb1c45d13eea2b0341cb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 11:15:42.943927 1350845 certs.go:257] generating profile certs ...
I1217 11:15:42.944006 1350845 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.key
I1217 11:15:42.944030 1350845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt with IP's: []
I1217 11:15:43.134502 1350845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt ...
I1217 11:15:43.134538 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: {Name:mk5424d0d2090e412eb1218c16143dc04c000352 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 11:15:43.134770 1350845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.key ...
I1217 11:15:43.134787 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.key: {Name:mk523ac9e78d3d64f6a3a3c09323f75212a30bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 11:15:43.134916 1350845 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.key.8b3aaf35
I1217 11:15:43.134939 1350845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.crt.8b3aaf35 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.28]
I1217 11:15:43.173860 1350845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.crt.8b3aaf35 ...
I1217 11:15:43.173892 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.crt.8b3aaf35: {Name:mk73e9f29099141c309fd594f0cc386347876e61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 11:15:43.174126 1350845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.key.8b3aaf35 ...
I1217 11:15:43.174148 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.key.8b3aaf35: {Name:mk6f66f6d6fe4b58e3f2eb4739723a42f05d6e5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 11:15:43.174265 1350845 certs.go:382] copying /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.crt.8b3aaf35 -> /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.crt
I1217 11:15:43.174357 1350845 certs.go:386] copying /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.key.8b3aaf35 -> /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.key
I1217 11:15:43.174411 1350845 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/proxy-client.key
I1217 11:15:43.174437 1350845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/proxy-client.crt with IP's: []
I1217 11:15:43.329546 1350845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/proxy-client.crt ...
I1217 11:15:43.329581 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/proxy-client.crt: {Name:mk6ea6acb6ee7459e3182ed91ab2506f933c6bf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 11:15:43.329807 1350845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/proxy-client.key ...
I1217 11:15:43.329828 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/proxy-client.key: {Name:mkf387330626a1f9c0557f85211ad7b7066f7816 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
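All of the certificates above descend from two self-signed CAs (`minikubeCA` and `proxyClientCA`). Generating a comparable self-signed CA with Go's standard library looks roughly like this (a sketch, not minikube's code; the subject and validity period here are illustrative):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// Create a self-signed CA certificate and print it as PEM.
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().AddDate(3, 0, 0), // illustrative validity
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```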
I1217 11:15:43.330070 1350845 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem (1675 bytes)
I1217 11:15:43.330151 1350845 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem (1082 bytes)
I1217 11:15:43.330180 1350845 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem (1123 bytes)
I1217 11:15:43.330206 1350845 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem (1675 bytes)
I1217 11:15:43.330796 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1217 11:15:43.361157 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1217 11:15:43.390584 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1217 11:15:43.421164 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1217 11:15:43.449959 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1217 11:15:43.493268 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1217 11:15:43.528368 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1217 11:15:43.557442 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
I1217 11:15:43.586386 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1217 11:15:43.616919 1350845 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1217 11:15:43.638028 1350845 ssh_runner.go:195] Run: openssl version
I1217 11:15:43.644308 1350845 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1217 11:15:43.655594 1350845 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1217 11:15:43.667095 1350845 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1217 11:15:43.672175 1350845 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
I1217 11:15:43.672235 1350845 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1217 11:15:43.679759 1350845 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1217 11:15:43.691695 1350845 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
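The `b5213941.0` symlink name is OpenSSL's subject hash for the CA, which is what `openssl x509 -hash -noout` printed in the previous step; OpenSSL's verifier locates CAs in `/etc/ssl/certs` by that `<hash>.0` convention. A sketch of the same two steps in Go, shelling out to openssl (needs root to write under /etc/ssl/certs):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Compute the subject hash for the CA PEM and symlink <hash>.0 to it.
func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // mirror `ln -fs`
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
}
```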
I1217 11:15:43.703669 1350845 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1217 11:15:43.708405 1350845 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1217 11:15:43.708484 1350845 kubeadm.go:401] StartCluster: {Name:addons-410268 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-410268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1217 11:15:43.708562 1350845 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1217 11:15:43.708615 1350845 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1217 11:15:43.740471 1350845 cri.go:89] found id: ""
I1217 11:15:43.740553 1350845 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1217 11:15:43.752356 1350845 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1217 11:15:43.763938 1350845 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1217 11:15:43.777880 1350845 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1217 11:15:43.777918 1350845 kubeadm.go:158] found existing configuration files:
I1217 11:15:43.778010 1350845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1217 11:15:43.790999 1350845 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1217 11:15:43.791096 1350845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1217 11:15:43.802800 1350845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1217 11:15:43.813617 1350845 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1217 11:15:43.813701 1350845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1217 11:15:43.825578 1350845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1217 11:15:43.836395 1350845 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1217 11:15:43.836495 1350845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1217 11:15:43.847921 1350845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1217 11:15:43.858810 1350845 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1217 11:15:43.858895 1350845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1217 11:15:43.870185 1350845 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1217 11:15:44.005684 1350845 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1217 11:15:55.523132 1350845 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
I1217 11:15:55.523210 1350845 kubeadm.go:319] [preflight] Running pre-flight checks
I1217 11:15:55.523301 1350845 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1217 11:15:55.523417 1350845 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1217 11:15:55.523541 1350845 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1217 11:15:55.523649 1350845 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1217 11:15:55.525182 1350845 out.go:252] - Generating certificates and keys ...
I1217 11:15:55.525296 1350845 kubeadm.go:319] [certs] Using existing ca certificate authority
I1217 11:15:55.525371 1350845 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1217 11:15:55.525461 1350845 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1217 11:15:55.525557 1350845 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1217 11:15:55.525632 1350845 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1217 11:15:55.525679 1350845 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1217 11:15:55.525729 1350845 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1217 11:15:55.525874 1350845 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-410268 localhost] and IPs [192.168.39.28 127.0.0.1 ::1]
I1217 11:15:55.525964 1350845 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1217 11:15:55.526122 1350845 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-410268 localhost] and IPs [192.168.39.28 127.0.0.1 ::1]
I1217 11:15:55.526180 1350845 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1217 11:15:55.526255 1350845 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1217 11:15:55.526294 1350845 kubeadm.go:319] [certs] Generating "sa" key and public key
I1217 11:15:55.526336 1350845 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1217 11:15:55.526375 1350845 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1217 11:15:55.526447 1350845 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1217 11:15:55.526513 1350845 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1217 11:15:55.526566 1350845 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1217 11:15:55.526659 1350845 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1217 11:15:55.526776 1350845 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1217 11:15:55.526870 1350845 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1217 11:15:55.528161 1350845 out.go:252] - Booting up control plane ...
I1217 11:15:55.528262 1350845 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1217 11:15:55.528349 1350845 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1217 11:15:55.528429 1350845 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1217 11:15:55.528544 1350845 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1217 11:15:55.528668 1350845 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1217 11:15:55.528820 1350845 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1217 11:15:55.528951 1350845 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1217 11:15:55.529031 1350845 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1217 11:15:55.529158 1350845 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1217 11:15:55.529319 1350845 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1217 11:15:55.529403 1350845 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002568436s
I1217 11:15:55.529516 1350845 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1217 11:15:55.529613 1350845 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.28:8443/livez
I1217 11:15:55.529699 1350845 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1217 11:15:55.529774 1350845 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1217 11:15:55.529839 1350845 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.263479249s
I1217 11:15:55.529896 1350845 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.536625223s
I1217 11:15:55.529955 1350845 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501854635s
I1217 11:15:55.530062 1350845 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1217 11:15:55.530234 1350845 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1217 11:15:55.530302 1350845 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1217 11:15:55.530525 1350845 kubeadm.go:319] [mark-control-plane] Marking the node addons-410268 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1217 11:15:55.530591 1350845 kubeadm.go:319] [bootstrap-token] Using token: 43l6ve.l582r2mo3awbrhao
I1217 11:15:55.532627 1350845 out.go:252] - Configuring RBAC rules ...
I1217 11:15:55.532727 1350845 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1217 11:15:55.532804 1350845 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1217 11:15:55.532927 1350845 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1217 11:15:55.533086 1350845 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1217 11:15:55.533294 1350845 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1217 11:15:55.533425 1350845 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1217 11:15:55.533566 1350845 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1217 11:15:55.533633 1350845 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1217 11:15:55.533696 1350845 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1217 11:15:55.533705 1350845 kubeadm.go:319]
I1217 11:15:55.533791 1350845 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1217 11:15:55.533800 1350845 kubeadm.go:319]
I1217 11:15:55.533903 1350845 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1217 11:15:55.533913 1350845 kubeadm.go:319]
I1217 11:15:55.533951 1350845 kubeadm.go:319] mkdir -p $HOME/.kube
I1217 11:15:55.534056 1350845 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1217 11:15:55.534143 1350845 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1217 11:15:55.534154 1350845 kubeadm.go:319]
I1217 11:15:55.534233 1350845 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1217 11:15:55.534246 1350845 kubeadm.go:319]
I1217 11:15:55.534314 1350845 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1217 11:15:55.534322 1350845 kubeadm.go:319]
I1217 11:15:55.534381 1350845 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1217 11:15:55.534451 1350845 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1217 11:15:55.534536 1350845 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1217 11:15:55.534547 1350845 kubeadm.go:319]
I1217 11:15:55.534656 1350845 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1217 11:15:55.534757 1350845 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1217 11:15:55.534771 1350845 kubeadm.go:319]
I1217 11:15:55.534892 1350845 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 43l6ve.l582r2mo3awbrhao \
I1217 11:15:55.535006 1350845 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:03d71c4919b4a2b722377932ade21f7a19ec06bb9a5b5ca567ebf14ade8ad6b0 \
I1217 11:15:55.535029 1350845 kubeadm.go:319] --control-plane
I1217 11:15:55.535033 1350845 kubeadm.go:319]
I1217 11:15:55.535109 1350845 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1217 11:15:55.535118 1350845 kubeadm.go:319]
I1217 11:15:55.535193 1350845 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 43l6ve.l582r2mo3awbrhao \
I1217 11:15:55.535298 1350845 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:03d71c4919b4a2b722377932ade21f7a19ec06bb9a5b5ca567ebf14ade8ad6b0
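The `--discovery-token-ca-cert-hash` in the join command above is a SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info. It can be recomputed from `ca.crt` like so:

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Print the kubeadm discovery hash (sha256 over the CA's SPKI) for ca.crt.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}
```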
I1217 11:15:55.535317 1350845 cni.go:84] Creating CNI manager for ""
I1217 11:15:55.535329 1350845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1217 11:15:55.537517 1350845 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1217 11:15:55.538705 1350845 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1217 11:15:55.551978 1350845 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1217 11:15:55.577547 1350845 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1217 11:15:55.577626 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 11:15:55.577702 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-410268 minikube.k8s.io/updated_at=2025_12_17T11_15_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869 minikube.k8s.io/name=addons-410268 minikube.k8s.io/primary=true
I1217 11:15:55.623901 1350845 ops.go:34] apiserver oom_adj: -16
I1217 11:15:55.726309 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 11:15:56.227208 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 11:15:56.727180 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 11:15:57.227401 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 11:15:57.727100 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 11:15:58.226689 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 11:15:58.727262 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 11:15:59.226401 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 11:15:59.727304 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 11:16:00.227368 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1217 11:16:00.338042 1350845 kubeadm.go:1114] duration metric: took 4.760491571s to wait for elevateKubeSystemPrivileges
I1217 11:16:00.338080 1350845 kubeadm.go:403] duration metric: took 16.629604919s to StartCluster
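The burst of `kubectl get sa default` runs above is a fixed-interval poll: minikube retries until the `default` ServiceAccount exists, the signal that it is safe to create the `minikube-rbac` binding. A generic version of that loop (an assumption about the shape, not minikube's actual helper):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Run check every interval until it succeeds or the context is done.
func pollUntil(ctx context.Context, interval time.Duration, check func() error) error {
	tick := time.NewTicker(interval)
	defer tick.Stop()
	for {
		if err := check(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	tries := 0
	err := pollUntil(ctx, 500*time.Millisecond, func() error {
		tries++
		if tries < 5 { // stand-in for `kubectl get sa default` failing
			return fmt.Errorf("not ready")
		}
		return nil
	})
	fmt.Println(err, tries)
}
```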
I1217 11:16:00.338102 1350845 settings.go:142] acquiring lock: {Name:mkab196c8ac23f97b54763cecaa5ac5ac8f7dd0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 11:16:00.338257 1350845 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21808-1345916/kubeconfig
I1217 11:16:00.338838 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/kubeconfig: {Name:mkf9f7ccd4382c7fd64f6772f4fae6c99a70cf57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 11:16:00.339139 1350845 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1217 11:16:00.339160 1350845 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
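The `toEnable` map above drives everything that follows: each `true` entry becomes a `Setting addon ...=true` line and its own setup goroutine. Reducing such a map to the sorted list of addons to start is straightforward (a sketch; the map values below are a subset copied from the log):

```go
package main

import (
	"fmt"
	"sort"
)

// Collect and sort the names of all enabled addons.
func main() {
	toEnable := map[string]bool{
		"ingress": true, "ingress-dns": true, "volcano": true,
		"dashboard": false, "metrics-server": true,
	}
	var enabled []string
	for name, on := range toEnable {
		if on {
			enabled = append(enabled, name)
		}
	}
	sort.Strings(enabled)
	fmt.Println(enabled)
}
```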
I1217 11:16:00.339131 1350845 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
I1217 11:16:00.339263 1350845 addons.go:70] Setting yakd=true in profile "addons-410268"
I1217 11:16:00.339272 1350845 addons.go:70] Setting default-storageclass=true in profile "addons-410268"
I1217 11:16:00.339281 1350845 addons.go:239] Setting addon yakd=true in "addons-410268"
I1217 11:16:00.339284 1350845 addons.go:70] Setting inspektor-gadget=true in profile "addons-410268"
I1217 11:16:00.339324 1350845 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-410268"
I1217 11:16:00.339329 1350845 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-410268"
I1217 11:16:00.339340 1350845 addons.go:70] Setting ingress=true in profile "addons-410268"
I1217 11:16:00.339349 1350845 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-410268"
I1217 11:16:00.339352 1350845 addons.go:70] Setting registry=true in profile "addons-410268"
I1217 11:16:00.339361 1350845 addons.go:239] Setting addon ingress=true in "addons-410268"
I1217 11:16:00.339369 1350845 addons.go:239] Setting addon registry=true in "addons-410268"
I1217 11:16:00.339372 1350845 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-410268"
I1217 11:16:00.339400 1350845 host.go:66] Checking if "addons-410268" exists ...
I1217 11:16:00.339406 1350845 host.go:66] Checking if "addons-410268" exists ...
I1217 11:16:00.339421 1350845 config.go:182] Loaded profile config "addons-410268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:16:00.339434 1350845 host.go:66] Checking if "addons-410268" exists ...
I1217 11:16:00.339469 1350845 addons.go:70] Setting metrics-server=true in profile "addons-410268"
I1217 11:16:00.339482 1350845 addons.go:239] Setting addon metrics-server=true in "addons-410268"
I1217 11:16:00.339501 1350845 host.go:66] Checking if "addons-410268" exists ...
I1217 11:16:00.339295 1350845 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-410268"
I1217 11:16:00.339307 1350845 host.go:66] Checking if "addons-410268" exists ...
I1217 11:16:00.340435 1350845 addons.go:70] Setting ingress-dns=true in profile "addons-410268"
I1217 11:16:00.340492 1350845 addons.go:239] Setting addon ingress-dns=true in "addons-410268"
I1217 11:16:00.340535 1350845 host.go:66] Checking if "addons-410268" exists ...
I1217 11:16:00.340763 1350845 addons.go:70] Setting volcano=true in profile "addons-410268"
I1217 11:16:00.339304 1350845 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-410268"
I1217 11:16:00.340794 1350845 addons.go:239] Setting addon volcano=true in "addons-410268"
I1217 11:16:00.340799 1350845 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-410268"
I1217 11:16:00.340828 1350845 host.go:66] Checking if "addons-410268" exists ...
I1217 11:16:00.339318 1350845 addons.go:239] Setting addon inspektor-gadget=true in "addons-410268"
I1217 11:16:00.340849 1350845 host.go:66] Checking if "addons-410268" exists ...
I1217 11:16:00.340830 1350845 host.go:66] Checking if "addons-410268" exists ...
I1217 11:16:00.341293 1350845 out.go:179] * Verifying Kubernetes components...
I1217 11:16:00.339372 1350845 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-410268"
I1217 11:16:00.341503 1350845 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-410268"
I1217 11:16:00.341530 1350845 addons.go:70] Setting volumesnapshots=true in profile "addons-410268"
I1217 11:16:00.341544 1350845 host.go:66] Checking if "addons-410268" exists ...
I1217 11:16:00.341547 1350845 addons.go:239] Setting addon volumesnapshots=true in "addons-410268"
I1217 11:16:00.341572 1350845 host.go:66] Checking if "addons-410268" exists ...
I1217 11:16:00.339315 1350845 addons.go:70] Setting storage-provisioner=true in profile "addons-410268"
I1217 11:16:00.339318 1350845 addons.go:70] Setting cloud-spanner=true in profile "addons-410268"
I1217 11:16:00.341928 1350845 addons.go:239] Setting addon cloud-spanner=true in "addons-410268"
I1217 11:16:00.341976 1350845 host.go:66] Checking if "addons-410268" exists ...
I1217 11:16:00.339328 1350845 addons.go:70] Setting gcp-auth=true in profile "addons-410268"
I1217 11:16:00.342027 1350845 mustload.go:66] Loading cluster: addons-410268
I1217 11:16:00.339332 1350845 addons.go:70] Setting registry-creds=true in profile "addons-410268"
I1217 11:16:00.342061 1350845 addons.go:239] Setting addon registry-creds=true in "addons-410268"
I1217 11:16:00.342093 1350845 host.go:66] Checking if "addons-410268" exists ...
I1217 11:16:00.342245 1350845 config.go:182] Loaded profile config "addons-410268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:16:00.341905 1350845 addons.go:239] Setting addon storage-provisioner=true in "addons-410268"
I1217 11:16:00.342340 1350845 host.go:66] Checking if "addons-410268" exists ...
I1217 11:16:00.343127 1350845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 11:16:00.347113 1350845 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1217 11:16:00.347233 1350845 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1217 11:16:00.347928 1350845 addons.go:239] Setting addon default-storageclass=true in "addons-410268"
I1217 11:16:00.347928 1350845 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-410268"
I1217 11:16:00.348043 1350845 host.go:66] Checking if "addons-410268" exists ...
I1217 11:16:00.348000 1350845 host.go:66] Checking if "addons-410268" exists ...
I1217 11:16:00.348345 1350845 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1217 11:16:00.348364 1350845 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1217 11:16:00.348405 1350845 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1217 11:16:00.349587 1350845 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1217 11:16:00.349589 1350845 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.6
W1217 11:16:00.349672 1350845 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1217 11:16:00.349716 1350845 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1217 11:16:00.349728 1350845 out.go:179] - Using image docker.io/registry:3.0.0
I1217 11:16:00.350777 1350845 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1217 11:16:00.350797 1350845 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1217 11:16:00.351538 1350845 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1217 11:16:00.351558 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1217 11:16:00.351667 1350845 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1217 11:16:00.352061 1350845 host.go:66] Checking if "addons-410268" exists ...
I1217 11:16:00.352091 1350845 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
I1217 11:16:00.352094 1350845 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
I1217 11:16:00.352100 1350845 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
I1217 11:16:00.352108 1350845 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1217 11:16:00.352091 1350845 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1217 11:16:00.352123 1350845 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1217 11:16:00.352151 1350845 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1217 11:16:00.352882 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1217 11:16:00.352954 1350845 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1217 11:16:00.352463 1350845 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1217 11:16:00.353725 1350845 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1217 11:16:00.353744 1350845 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1217 11:16:00.353777 1350845 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1217 11:16:00.354105 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1217 11:16:00.353902 1350845 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1217 11:16:00.354274 1350845 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1217 11:16:00.354503 1350845 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1217 11:16:00.354511 1350845 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1217 11:16:00.354524 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1217 11:16:00.354530 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1217 11:16:00.354538 1350845 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1217 11:16:00.354550 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1217 11:16:00.354706 1350845 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
I1217 11:16:00.354735 1350845 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1217 11:16:00.355075 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1217 11:16:00.355514 1350845 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1217 11:16:00.355556 1350845 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1217 11:16:00.356042 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1217 11:16:00.356406 1350845 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1217 11:16:00.356542 1350845 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1217 11:16:00.356832 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1217 11:16:00.358578 1350845 out.go:179] - Using image docker.io/busybox:stable
I1217 11:16:00.358599 1350845 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1217 11:16:00.359115 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.359783 1350845 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1217 11:16:00.359800 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1217 11:16:00.360824 1350845 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1217 11:16:00.361440 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:16:00.361475 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.362292 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.362395 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:16:00.362903 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.363152 1350845 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1217 11:16:00.364088 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.364461 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:16:00.364498 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.364860 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:16:00.364916 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.365046 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:16:00.365322 1350845 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1217 11:16:00.365794 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:16:00.365928 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:16:00.365967 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.366436 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.366547 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:16:00.366638 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.367725 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.367766 1350845 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1217 11:16:00.367854 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.368127 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.368442 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:16:00.368479 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.368744 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:16:00.368773 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.368803 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.368880 1350845 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1217 11:16:00.368898 1350845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1217 11:16:00.368915 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.369113 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:16:00.369720 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:16:00.369744 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.369813 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:16:00.369844 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.369892 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:16:00.369860 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.370036 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.370129 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:16:00.370166 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:16:00.370165 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.370187 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.370283 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:16:00.370307 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.370504 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:16:00.370676 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:16:00.370941 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:16:00.370944 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:16:00.371010 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:16:00.371199 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:16:00.371224 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.371511 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:16:00.371549 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.371543 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:16:00.371781 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:16:00.372169 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.372753 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:16:00.372786 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.372959 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:16:00.374371 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.374831 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:16:00.374851 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:00.375030 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
W1217 11:16:00.746241 1350845 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37266->192.168.39.28:22: read: connection reset by peer
I1217 11:16:00.746304 1350845 retry.go:31] will retry after 298.461677ms: ssh: handshake failed: read tcp 192.168.39.1:37266->192.168.39.28:22: read: connection reset by peer
W1217 11:16:00.808332 1350845 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37284->192.168.39.28:22: read: connection reset by peer
I1217 11:16:00.808368 1350845 retry.go:31] will retry after 197.5272ms: ssh: handshake failed: read tcp 192.168.39.1:37284->192.168.39.28:22: read: connection reset by peer
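-- editor's note --
The two sshutil dial failures above are absorbed by minikube's generic retry helper (the retry.go:31 lines). A minimal sketch of that retry-with-randomized-backoff pattern, assuming illustrative attempt counts and delay bounds rather than minikube's actual tuning:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn until it succeeds or attempts are exhausted,
// sleeping a random delay below maxDelay between tries and logging the
// retry, much like the "will retry after ..." lines in the log.
func retryWithBackoff(attempts int, maxDelay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := time.Duration(rand.Int63n(int64(maxDelay)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	dials := 0
	_ = retryWithBackoff(3, 500*time.Millisecond, func() error {
		dials++
		if dials < 3 { // stand-in for the real ssh dial
			return fmt.Errorf("ssh: handshake failed: connection reset by peer")
		}
		return nil
	})
}
-- /editor's note --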
I1217 11:16:01.316212 1350845 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1217 11:16:01.316254 1350845 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1217 11:16:01.317287 1350845 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1217 11:16:01.317305 1350845 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1217 11:16:01.321082 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1217 11:16:01.327792 1350845 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1217 11:16:01.327820 1350845 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1217 11:16:01.368636 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1217 11:16:01.371462 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1217 11:16:01.400566 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1217 11:16:01.405126 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1217 11:16:01.406779 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1217 11:16:01.510859 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1217 11:16:01.511125 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1217 11:16:01.583201 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1217 11:16:01.615683 1350845 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1217 11:16:01.615720 1350845 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1217 11:16:01.791361 1350845 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1217 11:16:01.791387 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1217 11:16:01.915513 1350845 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1217 11:16:01.915537 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1217 11:16:01.943003 1350845 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1217 11:16:01.943038 1350845 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1217 11:16:02.058501 1350845 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1217 11:16:02.058540 1350845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1217 11:16:02.134291 1350845 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.795102297s)
I1217 11:16:02.134358 1350845 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.791152272s)
I1217 11:16:02.134449 1350845 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1217 11:16:02.134493 1350845 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
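-- editor's note --
The /bin/bash pipeline above rewrites the coredns ConfigMap in place: sed inserts a hosts plugin block before the Corefile's `forward . /etc/resolv.conf` line, so that host.minikube.internal resolves to the host-side gateway (192.168.39.1 here), and inserts `log` above `errors`. Reconstructed from the sed expressions (not read back from the cluster), the patched Corefile fragment looks roughly like:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf

The edit's success is confirmed further down by the "host record injected into CoreDNS's ConfigMap" line.
-- /editor's note --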
I1217 11:16:02.140834 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1217 11:16:02.203927 1350845 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1217 11:16:02.203964 1350845 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1217 11:16:02.297250 1350845 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1217 11:16:02.297288 1350845 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1217 11:16:02.340129 1350845 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1217 11:16:02.340159 1350845 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1217 11:16:02.344606 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1217 11:16:02.437144 1350845 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1217 11:16:02.437177 1350845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1217 11:16:02.502765 1350845 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1217 11:16:02.502800 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1217 11:16:02.615967 1350845 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1217 11:16:02.616026 1350845 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1217 11:16:02.644799 1350845 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1217 11:16:02.644847 1350845 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1217 11:16:02.805794 1350845 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1217 11:16:02.805836 1350845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1217 11:16:02.892541 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1217 11:16:03.030936 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1217 11:16:03.042085 1350845 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1217 11:16:03.042126 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1217 11:16:03.219517 1350845 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1217 11:16:03.219558 1350845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1217 11:16:03.355025 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.033901875s)
I1217 11:16:03.461505 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1217 11:16:03.557899 1350845 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1217 11:16:03.557930 1350845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1217 11:16:03.935096 1350845 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1217 11:16:03.935126 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1217 11:16:04.201973 1350845 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1217 11:16:04.202013 1350845 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1217 11:16:04.501645 1350845 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1217 11:16:04.501678 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1217 11:16:04.866099 1350845 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1217 11:16:04.866135 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1217 11:16:05.324589 1350845 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1217 11:16:05.324617 1350845 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1217 11:16:05.569817 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1217 11:16:06.724055 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.355381318s)
I1217 11:16:06.724144 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.352648166s)
I1217 11:16:06.724229 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.319077403s)
I1217 11:16:06.724283 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.323691313s)
I1217 11:16:06.724368 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.317560985s)
I1217 11:16:06.724440 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.213287726s)
I1217 11:16:07.772293 1350845 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1217 11:16:07.775596 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:07.776100 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:16:07.776143 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:07.776333 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:16:08.153076 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.642170136s)
I1217 11:16:08.153132 1350845 addons.go:495] Verifying addon ingress=true in "addons-410268"
I1217 11:16:08.153196 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.569954536s)
I1217 11:16:08.153250 1350845 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.018779215s)
I1217 11:16:08.153405 1350845 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.018885637s)
I1217 11:16:08.153436 1350845 start.go:1013] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1217 11:16:08.153477 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.012588453s)
I1217 11:16:08.153554 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.808910052s)
I1217 11:16:08.153581 1350845 addons.go:495] Verifying addon registry=true in "addons-410268"
I1217 11:16:08.153733 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.122756962s)
I1217 11:16:08.153774 1350845 addons.go:495] Verifying addon metrics-server=true in "addons-410268"
I1217 11:16:08.153633 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.261054783s)
I1217 11:16:08.154290 1350845 node_ready.go:35] waiting up to 6m0s for node "addons-410268" to be "Ready" ...
I1217 11:16:08.155148 1350845 out.go:179] * Verifying registry addon...
I1217 11:16:08.155157 1350845 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-410268 service yakd-dashboard -n yakd-dashboard
I1217 11:16:08.155148 1350845 out.go:179] * Verifying ingress addon...
I1217 11:16:08.157159 1350845 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1217 11:16:08.157362 1350845 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1217 11:16:08.192726 1350845 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1217 11:16:08.211858 1350845 node_ready.go:49] node "addons-410268" is "Ready"
I1217 11:16:08.211890 1350845 node_ready.go:38] duration metric: took 57.576108ms for node "addons-410268" to be "Ready" ...
I1217 11:16:08.211910 1350845 api_server.go:52] waiting for apiserver process to appear ...
I1217 11:16:08.211973 1350845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 11:16:08.237579 1350845 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1217 11:16:08.237596 1350845 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1217 11:16:08.237603 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:16:08.237611 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
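-- editor's note --
The kapi.go:75/86/96 lines above poll pods matched by a label selector until every one reports Ready. A minimal client-go sketch of the same idea; the helper name is illustrative, not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsReady reports whether every pod matching selector in ns has the
// Ready condition set to True.
func podsReady(ctx context.Context, c kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil || len(pods.Items) == 0 {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, cond := range p.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(cfg)
	for {
		ok, err := podsReady(context.Background(), c, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx")
		if err != nil {
			panic(err)
		}
		if ok {
			fmt.Println("all pods ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
-- /editor's note --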
I1217 11:16:08.373486 1350845 addons.go:239] Setting addon gcp-auth=true in "addons-410268"
I1217 11:16:08.373555 1350845 host.go:66] Checking if "addons-410268" exists ...
I1217 11:16:08.375819 1350845 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1217 11:16:08.378843 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:08.379398 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
I1217 11:16:08.379437 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
I1217 11:16:08.379645 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
I1217 11:16:08.789395 1350845 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-410268" context rescaled to 1 replicas
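-- editor's note --
The "rescaled to 1 replicas" line corresponds to shrinking the coredns Deployment from the two replicas visible earlier in the pod list down to one. A minimal sketch using the Deployments scale subresource, assuming a standard kubeconfig; this is not minikube's actual kapi.go code:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Read the current scale of kube-system/coredns, then write it back
	// with Spec.Replicas forced to 1.
	scale, err := c.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := c.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}
-- /editor's note --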
I1217 11:16:08.796855 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:16:08.802172 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 11:16:09.048869 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.587319964s)
W1217 11:16:09.048945 1350845 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1217 11:16:09.048994 1350845 retry.go:31] will retry after 254.816128ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
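-- editor's note --
The apply failure above is a CRD establishment race: the VolumeSnapshotClass object sits in the same apply batch as the CRD that defines its kind, and the API server has not yet registered the new REST mapping when the custom resource is submitted, hence "ensure CRDs are installed first". minikube simply retries, and the apply --force at 11:16:09.304709 below succeeds. A minimal sketch of the more surgical fix, waiting for a CRD's Established condition before applying instances of it (illustrative, not minikube's code):

package main

import (
	"context"
	"fmt"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

// waitEstablished polls the named CRD until it reports Established=True,
// meaning the API server will now accept objects of its kind.
func waitEstablished(ctx context.Context, c apiextclient.Interface, name string) error {
	for {
		crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(250 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := apiextclient.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := waitEstablished(ctx, client, "volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
		panic(err)
	}
	fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
}
-- /editor's note --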
I1217 11:16:09.217737 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 11:16:09.217999 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:16:09.304709 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1217 11:16:09.674715 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 11:16:09.675113 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:16:10.164483 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 11:16:10.164559 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:16:10.169221 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.599348398s)
I1217 11:16:10.169247 1350845 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.957246506s)
I1217 11:16:10.169266 1350845 api_server.go:72] duration metric: took 9.829991807s to wait for apiserver process to appear ...
I1217 11:16:10.169263 1350845 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-410268"
I1217 11:16:10.169274 1350845 api_server.go:88] waiting for apiserver healthz status ...
I1217 11:16:10.169295 1350845 api_server.go:253] Checking apiserver healthz at https://192.168.39.28:8443/healthz ...
I1217 11:16:10.169316 1350845 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.793467482s)
I1217 11:16:10.170896 1350845 out.go:179] * Verifying csi-hostpath-driver addon...
I1217 11:16:10.170908 1350845 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1217 11:16:10.172141 1350845 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1217 11:16:10.172723 1350845 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1217 11:16:10.173213 1350845 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1217 11:16:10.173235 1350845 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1217 11:16:10.190414 1350845 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1217 11:16:10.190439 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 11:16:10.199394 1350845 api_server.go:279] https://192.168.39.28:8443/healthz returned 200:
ok
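-- editor's note --
The healthz probe above is a plain HTTPS GET against the apiserver, treating status 200 with body "ok" as healthy. A minimal sketch of the same check; TLS verification is skipped here purely for illustration, whereas a real client would use the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.28:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
-- /editor's note --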
I1217 11:16:10.217685 1350845 api_server.go:141] control plane version: v1.34.3
I1217 11:16:10.217721 1350845 api_server.go:131] duration metric: took 48.440983ms to wait for apiserver health ...
I1217 11:16:10.217731 1350845 system_pods.go:43] waiting for kube-system pods to appear ...
I1217 11:16:10.269581 1350845 system_pods.go:59] 20 kube-system pods found
I1217 11:16:10.269621 1350845 system_pods.go:61] "amd-gpu-device-plugin-7vz7s" [d5f6f486-f31c-465a-bbac-0cabfeabfa57] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1217 11:16:10.269630 1350845 system_pods.go:61] "coredns-66bc5c9577-f9dfv" [b3c65235-f139-4f33-adef-fc6ef1ccb253] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1217 11:16:10.269638 1350845 system_pods.go:61] "coredns-66bc5c9577-svfjn" [e8aebe9d-3a17-487e-be9b-4e688cd2b8bd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1217 11:16:10.269644 1350845 system_pods.go:61] "csi-hostpath-attacher-0" [67dac145-8016-43d9-913c-e078ba2ba440] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1217 11:16:10.269650 1350845 system_pods.go:61] "csi-hostpath-resizer-0" [70902375-0f7e-4cac-902c-bfb8dc1b0407] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1217 11:16:10.269668 1350845 system_pods.go:61] "csi-hostpathplugin-674kp" [8d5e02ac-f5bd-46e2-8ddb-18cdde14e1bc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1217 11:16:10.269674 1350845 system_pods.go:61] "etcd-addons-410268" [05d85a95-6449-4397-adfb-9a20407a423a] Running
I1217 11:16:10.269679 1350845 system_pods.go:61] "kube-apiserver-addons-410268" [13816250-d3c5-4d81-ad74-ffe9cb3ddbc5] Running
I1217 11:16:10.269687 1350845 system_pods.go:61] "kube-controller-manager-addons-410268" [5aa78d38-35e1-472f-8299-cfc242fca369] Running
I1217 11:16:10.269696 1350845 system_pods.go:61] "kube-ingress-dns-minikube" [6073097f-5ea5-4564-9be4-35f9191742dc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1217 11:16:10.269701 1350845 system_pods.go:61] "kube-proxy-6pdv6" [c6d1e053-5420-4db6-a1f6-daab3034e85c] Running
I1217 11:16:10.269722 1350845 system_pods.go:61] "kube-scheduler-addons-410268" [2401fedc-c4f4-48eb-9807-2abc585513d0] Running
I1217 11:16:10.269730 1350845 system_pods.go:61] "metrics-server-85b7d694d7-wzdd7" [45eadf4d-9bab-4bbf-88c7-99c4433a113d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1217 11:16:10.269741 1350845 system_pods.go:61] "nvidia-device-plugin-daemonset-5czqh" [22222c18-08cb-4be5-93fc-4e2715120b95] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1217 11:16:10.269751 1350845 system_pods.go:61] "registry-6b586f9694-zzpqs" [5234c3bf-e000-4d51-80db-779c52aba6bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1217 11:16:10.269756 1350845 system_pods.go:61] "registry-creds-764b6fb674-4z6q4" [eb27db8e-73bb-47b3-b506-a5be0bb9dbdb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1217 11:16:10.269761 1350845 system_pods.go:61] "registry-proxy-tgq9f" [acc44f29-6589-4709-855b-7ecb669c57b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1217 11:16:10.269766 1350845 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4d5hl" [47a9cd1f-a9eb-4de4-abf7-4a920d621e74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1217 11:16:10.269771 1350845 system_pods.go:61] "snapshot-controller-7d9fbc56b8-cgr4k" [54fa31c8-1652-4949-a486-f0f561074620] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1217 11:16:10.269775 1350845 system_pods.go:61] "storage-provisioner" [c32b2c11-7a98-48ba-89d5-3a5e581c171b] Running
I1217 11:16:10.269782 1350845 system_pods.go:74] duration metric: took 52.044895ms to wait for pod list to return data ...
I1217 11:16:10.269792 1350845 default_sa.go:34] waiting for default service account to be created ...
I1217 11:16:10.275299 1350845 default_sa.go:45] found service account: "default"
I1217 11:16:10.275318 1350845 default_sa.go:55] duration metric: took 5.520735ms for default service account to be created ...
I1217 11:16:10.275326 1350845 system_pods.go:116] waiting for k8s-apps to be running ...
I1217 11:16:10.280468 1350845 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1217 11:16:10.280491 1350845 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1217 11:16:10.280935 1350845 system_pods.go:86] 20 kube-system pods found
I1217 11:16:10.280970 1350845 system_pods.go:89] "amd-gpu-device-plugin-7vz7s" [d5f6f486-f31c-465a-bbac-0cabfeabfa57] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1217 11:16:10.280976 1350845 system_pods.go:89] "coredns-66bc5c9577-f9dfv" [b3c65235-f139-4f33-adef-fc6ef1ccb253] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1217 11:16:10.281007 1350845 system_pods.go:89] "coredns-66bc5c9577-svfjn" [e8aebe9d-3a17-487e-be9b-4e688cd2b8bd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1217 11:16:10.281015 1350845 system_pods.go:89] "csi-hostpath-attacher-0" [67dac145-8016-43d9-913c-e078ba2ba440] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1217 11:16:10.281023 1350845 system_pods.go:89] "csi-hostpath-resizer-0" [70902375-0f7e-4cac-902c-bfb8dc1b0407] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1217 11:16:10.281032 1350845 system_pods.go:89] "csi-hostpathplugin-674kp" [8d5e02ac-f5bd-46e2-8ddb-18cdde14e1bc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1217 11:16:10.281042 1350845 system_pods.go:89] "etcd-addons-410268" [05d85a95-6449-4397-adfb-9a20407a423a] Running
I1217 11:16:10.281049 1350845 system_pods.go:89] "kube-apiserver-addons-410268" [13816250-d3c5-4d81-ad74-ffe9cb3ddbc5] Running
I1217 11:16:10.281054 1350845 system_pods.go:89] "kube-controller-manager-addons-410268" [5aa78d38-35e1-472f-8299-cfc242fca369] Running
I1217 11:16:10.281061 1350845 system_pods.go:89] "kube-ingress-dns-minikube" [6073097f-5ea5-4564-9be4-35f9191742dc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1217 11:16:10.281066 1350845 system_pods.go:89] "kube-proxy-6pdv6" [c6d1e053-5420-4db6-a1f6-daab3034e85c] Running
I1217 11:16:10.281070 1350845 system_pods.go:89] "kube-scheduler-addons-410268" [2401fedc-c4f4-48eb-9807-2abc585513d0] Running
I1217 11:16:10.281075 1350845 system_pods.go:89] "metrics-server-85b7d694d7-wzdd7" [45eadf4d-9bab-4bbf-88c7-99c4433a113d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1217 11:16:10.281089 1350845 system_pods.go:89] "nvidia-device-plugin-daemonset-5czqh" [22222c18-08cb-4be5-93fc-4e2715120b95] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1217 11:16:10.281094 1350845 system_pods.go:89] "registry-6b586f9694-zzpqs" [5234c3bf-e000-4d51-80db-779c52aba6bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1217 11:16:10.281099 1350845 system_pods.go:89] "registry-creds-764b6fb674-4z6q4" [eb27db8e-73bb-47b3-b506-a5be0bb9dbdb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1217 11:16:10.281103 1350845 system_pods.go:89] "registry-proxy-tgq9f" [acc44f29-6589-4709-855b-7ecb669c57b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1217 11:16:10.281109 1350845 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4d5hl" [47a9cd1f-a9eb-4de4-abf7-4a920d621e74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1217 11:16:10.281118 1350845 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cgr4k" [54fa31c8-1652-4949-a486-f0f561074620] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1217 11:16:10.281124 1350845 system_pods.go:89] "storage-provisioner" [c32b2c11-7a98-48ba-89d5-3a5e581c171b] Running
I1217 11:16:10.281134 1350845 system_pods.go:126] duration metric: took 5.801932ms to wait for k8s-apps to be running ...
I1217 11:16:10.281145 1350845 system_svc.go:44] waiting for kubelet service to be running ....
I1217 11:16:10.281197 1350845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1217 11:16:10.369790 1350845 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1217 11:16:10.369815 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1217 11:16:10.421214 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1217 11:16:10.663238 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 11:16:10.663813 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:16:10.677340 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 11:16:10.991243 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.686475947s)
I1217 11:16:10.991280 1350845 system_svc.go:56] duration metric: took 710.125534ms WaitForService to wait for kubelet
I1217 11:16:10.991310 1350845 kubeadm.go:587] duration metric: took 10.652032407s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1217 11:16:10.991331 1350845 node_conditions.go:102] verifying NodePressure condition ...
I1217 11:16:10.997161 1350845 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1217 11:16:10.997184 1350845 node_conditions.go:123] node cpu capacity is 2
I1217 11:16:10.997205 1350845 node_conditions.go:105] duration metric: took 5.869128ms to run NodePressure ...
I1217 11:16:10.997219 1350845 start.go:242] waiting for startup goroutines ...
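(The NodePressure check above reads the node's capacity fields; a hand-run equivalent that should report the same cpu: 2 and ephemeral-storage: 17734596Ki values would be something like

    kubectl --context addons-410268 get node addons-410268 -o jsonpath='{.status.capacity}'

assuming, as is usual for a single-node minikube cluster, that the node carries the profile name.)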
I1217 11:16:11.161706 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:16:11.163637 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 11:16:11.176522 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 11:16:11.566474 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.145204213s)
I1217 11:16:11.567507 1350845 addons.go:495] Verifying addon gcp-auth=true in "addons-410268"
I1217 11:16:11.569240 1350845 out.go:179] * Verifying gcp-auth addon...
I1217 11:16:11.570879 1350845 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1217 11:16:11.596054 1350845 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1217 11:16:11.596073 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:16:11.671291 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:16:11.673706 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1217 11:16:11.684244 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
[... 241 near-identical kapi.go:96 poll lines elided: the pods for "kubernetes.io/minikube-addons=gcp-auth", "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=registry", and "kubernetes.io/minikube-addons=csi-hostpath-driver" all remained in state Pending: [<nil>], polled roughly every 500ms from 11:16:12 to 11:16:42 ...]
I1217 11:16:42.162370 1350845 kapi.go:107] duration metric: took 34.005210017s to wait for kubernetes.io/minikube-addons=registry ...
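(At this point the registry selector is done and three selectors are still Pending; a quick hand-run way to see which pods are still not scheduled or started, without knowing their namespaces, would be

    kubectl --context addons-410268 get pods -A --field-selector=status.phase=Pending

which lists every pod cluster-wide that is still in phase Pending.)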
[... 190 near-identical kapi.go:96 poll lines elided: "kubernetes.io/minikube-addons=gcp-auth", "app.kubernetes.io/name=ingress-nginx", and "kubernetes.io/minikube-addons=csi-hostpath-driver" all remained in state Pending: [<nil>], polled roughly every 500ms from 11:16:42 to 11:17:13, where this excerpt ends ...]
I1217 11:17:13.676144 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1217 11:17:14.078386 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:14.184806 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:17:14.185706 1350845 kapi.go:107] duration metric: took 1m4.012983057s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1217 11:17:14.574367 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:14.660347 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:17:15.222334 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:15.222566 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:17:15.577902 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:15.665571 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:17:16.079034 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:16.163031 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:17:16.576381 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:16.667487 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:17:17.075100 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:17.162207 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:17:17.575187 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:17.663282 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:17:18.253430 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:17:18.254173 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:18.575684 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:18.661028 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:17:19.076688 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:19.161392 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:17:19.574812 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:19.660762 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:17:20.076261 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:20.177133 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:17:20.575070 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:20.661318 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1217 11:17:21.075719 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:21.161434 1350845 kapi.go:107] duration metric: took 1m13.004075369s to wait for app.kubernetes.io/name=ingress-nginx ...
I1217 11:17:21.575140 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:22.075088 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:22.576460 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:23.074098 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:23.576784 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:24.143078 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:24.575020 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1217 11:17:25.075767 1350845 kapi.go:107] duration metric: took 1m13.504882856s to wait for kubernetes.io/minikube-addons=gcp-auth ...
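The three duration metrics above close out a simple fixed-interval poll: kapi.go lists the pods matching one label selector roughly every 500ms, logs the observed state (the repeated "Pending: [<nil>]" lines), and stops once every match reports Ready. A minimal sketch of that pattern using client-go follows; it is illustrative only, not minikube's actual kapi.go code, and the kubeconfig path, namespace, and 3-minute timeout are assumptions:

// A sketch of a label-selector readiness poll, assuming client-go.
// Not minikube's kapi.go; namespace, selector, and timeout are illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Default kubeconfig (~/.kube/config); minikube writes its context there.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	selector := "app.kubernetes.io/name=ingress-nginx" // one selector polled above
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 3*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("ingress-nginx").List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient error or no pods yet: keep polling
			}
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					fmt.Printf("waiting for pod %q, current phase: %s\n", selector, pods.Items[i].Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	fmt.Println("wait finished:", err)
}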
I1217 11:17:25.077489 1350845 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-410268 cluster.
I1217 11:17:25.078739 1350845 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1217 11:17:25.080188 1350845 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
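The gcp-auth-skip-secret opt-out described above is an ordinary pod label that has to be present when the pod is created, which is why minikube says existing pods must be recreated. A minimal illustration (the pod name and image are placeholders, and the =true value is an assumption; the key is the one minikube names):

  kubectl --context addons-410268 run skip-demo --image=nginx --labels=gcp-auth-skip-secret=true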
I1217 11:17:25.082032 1350845 out.go:179] * Enabled addons: registry-creds, inspektor-gadget, cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, default-storageclass, amd-gpu-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I1217 11:17:25.083284 1350845 addons.go:530] duration metric: took 1m24.744121732s for enable addons: enabled=[registry-creds inspektor-gadget cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns default-storageclass amd-gpu-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I1217 11:17:25.083343 1350845 start.go:247] waiting for cluster config update ...
I1217 11:17:25.083377 1350845 start.go:256] writing updated cluster config ...
I1217 11:17:25.083669 1350845 ssh_runner.go:195] Run: rm -f paused
I1217 11:17:25.089274 1350845 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1217 11:17:25.093134 1350845 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f9dfv" in "kube-system" namespace to be "Ready" or be gone ...
I1217 11:17:25.098552 1350845 pod_ready.go:94] pod "coredns-66bc5c9577-f9dfv" is "Ready"
I1217 11:17:25.098576 1350845 pod_ready.go:86] duration metric: took 5.421914ms for pod "coredns-66bc5c9577-f9dfv" in "kube-system" namespace to be "Ready" or be gone ...
I1217 11:17:25.101079 1350845 pod_ready.go:83] waiting for pod "etcd-addons-410268" in "kube-system" namespace to be "Ready" or be gone ...
I1217 11:17:25.106432 1350845 pod_ready.go:94] pod "etcd-addons-410268" is "Ready"
I1217 11:17:25.106454 1350845 pod_ready.go:86] duration metric: took 5.356623ms for pod "etcd-addons-410268" in "kube-system" namespace to be "Ready" or be gone ...
I1217 11:17:25.108771 1350845 pod_ready.go:83] waiting for pod "kube-apiserver-addons-410268" in "kube-system" namespace to be "Ready" or be gone ...
I1217 11:17:25.113893 1350845 pod_ready.go:94] pod "kube-apiserver-addons-410268" is "Ready"
I1217 11:17:25.113911 1350845 pod_ready.go:86] duration metric: took 5.117842ms for pod "kube-apiserver-addons-410268" in "kube-system" namespace to be "Ready" or be gone ...
I1217 11:17:25.116174 1350845 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-410268" in "kube-system" namespace to be "Ready" or be gone ...
I1217 11:17:25.493720 1350845 pod_ready.go:94] pod "kube-controller-manager-addons-410268" is "Ready"
I1217 11:17:25.493753 1350845 pod_ready.go:86] duration metric: took 377.552241ms for pod "kube-controller-manager-addons-410268" in "kube-system" namespace to be "Ready" or be gone ...
I1217 11:17:25.694964 1350845 pod_ready.go:83] waiting for pod "kube-proxy-6pdv6" in "kube-system" namespace to be "Ready" or be gone ...
I1217 11:17:26.093391 1350845 pod_ready.go:94] pod "kube-proxy-6pdv6" is "Ready"
I1217 11:17:26.093422 1350845 pod_ready.go:86] duration metric: took 398.410611ms for pod "kube-proxy-6pdv6" in "kube-system" namespace to be "Ready" or be gone ...
I1217 11:17:26.293825 1350845 pod_ready.go:83] waiting for pod "kube-scheduler-addons-410268" in "kube-system" namespace to be "Ready" or be gone ...
I1217 11:17:26.693756 1350845 pod_ready.go:94] pod "kube-scheduler-addons-410268" is "Ready"
I1217 11:17:26.693783 1350845 pod_ready.go:86] duration metric: took 399.902092ms for pod "kube-scheduler-addons-410268" in "kube-system" namespace to be "Ready" or be gone ...
I1217 11:17:26.693797 1350845 pod_ready.go:40] duration metric: took 1.604488519s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
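The pod_ready pass above checks each control-plane selector in turn and accepts a pod that is either Ready or gone. A rough manual equivalent for one of those selectors is shown below; it is illustrative only, since the test binary queries the API directly rather than shelling out, and kubectl wait has no "or be gone" escape hatch:

  kubectl --context addons-410268 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m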
I1217 11:17:26.741152 1350845 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
I1217 11:17:26.742904 1350845 out.go:179] * Done! kubectl is now configured to use "addons-410268" cluster and "default" namespace by default
==> CRI-O <==
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.882822712Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b298076f-cc1d-4195-b96d-0d3f9984a187 name=/runtime.v1.RuntimeService/ListContainers
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.882907843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b298076f-cc1d-4195-b96d-0d3f9984a187 name=/runtime.v1.RuntimeService/ListContainers
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.883613635Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3388f744eafd79eead0b5cf45f82b2bb84d2d06d6d7e4a006bb805f6ece193af,PodSandboxId:e6bd9f288ebf4cc56da58c245cc18922df0dd7178151de1119656ef662963808,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765970279763976133,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8c813f3-2dd2-444d-88d8-fe297f907413,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a3d3f9b57387f33a36c34182b887ebe1682722b4962431559e27be67059c84,PodSandboxId:89babac933f5aa295026f142bf82dbc55a8133f3f75c64ecfc188492117a4d4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765970251053857675,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 89b289cf-cd57-4583-9745-2ff3ad4a62ac,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3bd861b00947ea747cba080e36feb58d620667bd535a3e091dbcc8119f2f8d,PodSandboxId:a5ad2d4e669ae7633426a272a372a1ba1aeb99b745f797d026ce3ca3157ed186,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765970239840773449,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-wnptk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: efa47144-c3a7-4842-b47a-dccdfad29fa0,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ce9723c594e95d3af7ff756ea18c604a5a5c85238726ed7f208eb8ca1fe9521a,PodSandboxId:ab1d3f4303d9ff06239d666f537ce2ad700c6da2c32df914e264ea4c0b557ce3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765970212994447146,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xcp88,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0ea1bb28-6eb2-4e5f-a0ab-ae4ac81e953d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe1320aa0577943f2ce261f25568a27312decbe658920f895186345ff229969,PodSandboxId:d156e490be750c3b9a5e337893c5b3a7e7bc1615b4f159398ba8ebdb1524e7b0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765970212875683522,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nfwbf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d3c5178c-0e1c-404b-a454-cd0502cb0ba6,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e71f626582a418e0ed5c8719bfd643ff314b7611415afe559dcc3f7323bb80b,PodSandboxId:90ece57d709f4ccc566a2741d429cf0bae90a9c669f1b48a5cb1fb087ae69778,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765970196708651331,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6073097f-5ea5-4564-9be4-35f9191742dc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6935f264be567197d91ea3539a21b4ad960e18fce8f0cd2fd8a064aee0962b,PodSandboxId:0ecb8b919ad23721d145afcc276bf240c33e08867b8b11afbf7cc21919836c35,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765970177766563755,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7vz7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f6f486-f31c-465a-bbac-0cabfeabfa57,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bae8cb5b25781f10a1935e53f8d3800277b2d6f7cebc7ade9b8ef9ed6582c44,PodSandboxId:65b713e60d6e6ace420aa097c21c09e236417669978aa500826c9f51e1129455,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765970167824007229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32b2c11-7a98-48ba-89d5-3a5e581c171b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02277316fac522833acac691e0e0cd2fe5d863294b5a9a6c9d4ce03fbcfd48f8,PodSandboxId:33825f2fd286da9301fc3ba0fcc90cbd1238b56b148a5f9e3256e0dbe31b2547,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765970161735737258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-f9dfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c65235-f139-4f33-adef-fc6ef1ccb253,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb396bc0e216641073e6dc503e2a99bad41acfc829a1f131a5d1f0fc16e232e0,PodSandboxId:40ea1b3a1351db4bb464ffe1eb4ebcab02d218ef66454fb944ac5e8fc0d98ae6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765970160807960000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6pdv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d1e053-5420-4db6-a1f6-daab3034e85c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c48b14c60d6a45580132868fce1558152914faf87b5d0e6df6f66364e511801,PodSandboxId:1ab816366f67f435cc5cc75420205135cbead4b6941122b61878c2debaca3b89,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765970149189982033,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10e8ce73c3cc79b63688be36508c3f66,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de5475dd01a03ce3358f3c656b415b83b074580e46c6ad1a130279cec74872ab,PodSandboxId:daa48fe629c484b123b35d228ebb38cf0bae01d253e2de4d8580ac6bc280920b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765970149213819944,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cfbacbbf633ed0be3d9c6bc9784a200,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c14e25e96ab62a91146b02184edba5d238f99effa3b33a0a7fdeac0d6813524,PodSandboxId:f245ba360082c2736323673a75934b41e50a424d197fbda93c00c99e5ae0e67e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765970149170240476,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08453d2f2433d0eaf792f
305f65cd5f7,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56b63b5b38c52a74a999a91f90bca01f6ee7238bada280b7886f9a5ab521452,PodSandboxId:66b4583fa8412d7d376eb513c5676ec785a5817b40b8d53871ef9d12bbe6a8c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765970149143363619,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 746f2826e5ab144162efd3359f041e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b298076f-cc1d-4195-b96d-0d3f9984a187 name=/runtime.v1.RuntimeService/ListContainers
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.918953278Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f657c6d0-0d8e-4c07-814c-9b631f5a81fc name=/runtime.v1.RuntimeService/Version
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.919040920Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f657c6d0-0d8e-4c07-814c-9b631f5a81fc name=/runtime.v1.RuntimeService/Version
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.920779869Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95387715-0800-4306-9492-9ed0e9784d36 name=/runtime.v1.ImageService/ImageFsInfo
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.922143504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765970421922117204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551113,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95387715-0800-4306-9492-9ed0e9784d36 name=/runtime.v1.ImageService/ImageFsInfo
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.923111372Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dec9b7b0-8957-4da5-9e53-980522d83e56 name=/runtime.v1.RuntimeService/ListContainers
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.923210252Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dec9b7b0-8957-4da5-9e53-980522d83e56 name=/runtime.v1.RuntimeService/ListContainers
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.923513517Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3388f744eafd79eead0b5cf45f82b2bb84d2d06d6d7e4a006bb805f6ece193af,PodSandboxId:e6bd9f288ebf4cc56da58c245cc18922df0dd7178151de1119656ef662963808,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765970279763976133,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8c813f3-2dd2-444d-88d8-fe297f907413,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a3d3f9b57387f33a36c34182b887ebe1682722b4962431559e27be67059c84,PodSandboxId:89babac933f5aa295026f142bf82dbc55a8133f3f75c64ecfc188492117a4d4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765970251053857675,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 89b289cf-cd57-4583-9745-2ff3ad4a62ac,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3bd861b00947ea747cba080e36feb58d620667bd535a3e091dbcc8119f2f8d,PodSandboxId:a5ad2d4e669ae7633426a272a372a1ba1aeb99b745f797d026ce3ca3157ed186,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765970239840773449,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-wnptk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: efa47144-c3a7-4842-b47a-dccdfad29fa0,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ce9723c594e95d3af7ff756ea18c604a5a5c85238726ed7f208eb8ca1fe9521a,PodSandboxId:ab1d3f4303d9ff06239d666f537ce2ad700c6da2c32df914e264ea4c0b557ce3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765970212994447146,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xcp88,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0ea1bb28-6eb2-4e5f-a0ab-ae4ac81e953d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe1320aa0577943f2ce261f25568a27312decbe658920f895186345ff229969,PodSandboxId:d156e490be750c3b9a5e337893c5b3a7e7bc1615b4f159398ba8ebdb1524e7b0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765970212875683522,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nfwbf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d3c5178c-0e1c-404b-a454-cd0502cb0ba6,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e71f626582a418e0ed5c8719bfd643ff314b7611415afe559dcc3f7323bb80b,PodSandboxId:90ece57d709f4ccc566a2741d429cf0bae90a9c669f1b48a5cb1fb087ae69778,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765970196708651331,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6073097f-5ea5-4564-9be4-35f9191742dc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6935f264be567197d91ea3539a21b4ad960e18fce8f0cd2fd8a064aee0962b,PodSandboxId:0ecb8b919ad23721d145afcc276bf240c33e08867b8b11afbf7cc21919836c35,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765970177766563755,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7vz7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f6f486-f31c-465a-bbac-0cabfeabfa57,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bae8cb5b25781f10a1935e53f8d3800277b2d6f7cebc7ade9b8ef9ed6582c44,PodSandboxId:65b713e60d6e6ace420aa097c21c09e236417669978aa500826c9f51e1129455,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765970167824007229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32b2c11-7a98-48ba-89d5-3a5e581c171b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02277316fac522833acac691e0e0cd2fe5d863294b5a9a6c9d4ce03fbcfd48f8,PodSandboxId:33825f2fd286da9301fc3ba0fcc90cbd1238b56b148a5f9e3256e0dbe31b2547,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765970161735737258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-f9dfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c65235-f139-4f33-adef-fc6ef1ccb253,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb396bc0e216641073e6dc503e2a99bad41acfc829a1f131a5d1f0fc16e232e0,PodSandboxId:40ea1b3a1351db4bb464ffe1eb4ebcab02d218ef66454fb944ac5e8fc0d98ae6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765970160807960000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6pdv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d1e053-5420-4db6-a1f6-daab3034e85c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c48b14c60d6a45580132868fce1558152914faf87b5d0e6df6f66364e511801,PodSandboxId:1ab816366f67f435cc5cc75420205135cbead4b6941122b61878c2debaca3b89,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765970149189982033,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10e8ce73c3cc79b63688be36508c3f66,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de5475dd01a03ce3358f3c656b415b83b074580e46c6ad1a130279cec74872ab,PodSandboxId:daa48fe629c484b123b35d228ebb38cf0bae01d253e2de4d8580ac6bc280920b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765970149213819944,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cfbacbbf633ed0be3d9c6bc9784a200,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c14e25e96ab62a91146b02184edba5d238f99effa3b33a0a7fdeac0d6813524,PodSandboxId:f245ba360082c2736323673a75934b41e50a424d197fbda93c00c99e5ae0e67e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765970149170240476,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08453d2f2433d0eaf792f
305f65cd5f7,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56b63b5b38c52a74a999a91f90bca01f6ee7238bada280b7886f9a5ab521452,PodSandboxId:66b4583fa8412d7d376eb513c5676ec785a5817b40b8d53871ef9d12bbe6a8c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765970149143363619,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 746f2826e5ab144162efd3359f041e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dec9b7b0-8957-4da5-9e53-980522d83e56 name=/runtime.v1.RuntimeService/ListContainers
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.957892862Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2cc50bab-88a1-4189-bb7b-2381ad6991c9 name=/runtime.v1.RuntimeService/Version
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.957965275Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2cc50bab-88a1-4189-bb7b-2381ad6991c9 name=/runtime.v1.RuntimeService/Version
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.959293776Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4266b68c-eac9-4188-925d-b0d8d9cafa7e name=/runtime.v1.ImageService/ImageFsInfo
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.960680560Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765970421960651765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551113,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4266b68c-eac9-4188-925d-b0d8d9cafa7e name=/runtime.v1.ImageService/ImageFsInfo
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.961588702Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e97439e2-14e8-4422-bd79-6064a4097188 name=/runtime.v1.RuntimeService/ListContainers
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.961663770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e97439e2-14e8-4422-bd79-6064a4097188 name=/runtime.v1.RuntimeService/ListContainers
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.961940620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3388f744eafd79eead0b5cf45f82b2bb84d2d06d6d7e4a006bb805f6ece193af,PodSandboxId:e6bd9f288ebf4cc56da58c245cc18922df0dd7178151de1119656ef662963808,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765970279763976133,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8c813f3-2dd2-444d-88d8-fe297f907413,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a3d3f9b57387f33a36c34182b887ebe1682722b4962431559e27be67059c84,PodSandboxId:89babac933f5aa295026f142bf82dbc55a8133f3f75c64ecfc188492117a4d4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765970251053857675,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 89b289cf-cd57-4583-9745-2ff3ad4a62ac,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3bd861b00947ea747cba080e36feb58d620667bd535a3e091dbcc8119f2f8d,PodSandboxId:a5ad2d4e669ae7633426a272a372a1ba1aeb99b745f797d026ce3ca3157ed186,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765970239840773449,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-wnptk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: efa47144-c3a7-4842-b47a-dccdfad29fa0,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ce9723c594e95d3af7ff756ea18c604a5a5c85238726ed7f208eb8ca1fe9521a,PodSandboxId:ab1d3f4303d9ff06239d666f537ce2ad700c6da2c32df914e264ea4c0b557ce3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765970212994447146,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xcp88,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0ea1bb28-6eb2-4e5f-a0ab-ae4ac81e953d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe1320aa0577943f2ce261f25568a27312decbe658920f895186345ff229969,PodSandboxId:d156e490be750c3b9a5e337893c5b3a7e7bc1615b4f159398ba8ebdb1524e7b0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765970212875683522,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nfwbf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d3c5178c-0e1c-404b-a454-cd0502cb0ba6,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e71f626582a418e0ed5c8719bfd643ff314b7611415afe559dcc3f7323bb80b,PodSandboxId:90ece57d709f4ccc566a2741d429cf0bae90a9c669f1b48a5cb1fb087ae69778,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765970196708651331,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6073097f-5ea5-4564-9be4-35f9191742dc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6935f264be567197d91ea3539a21b4ad960e18fce8f0cd2fd8a064aee0962b,PodSandboxId:0ecb8b919ad23721d145afcc276bf240c33e08867b8b11afbf7cc21919836c35,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765970177766563755,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7vz7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f6f486-f31c-465a-bbac-0cabfeabfa57,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bae8cb5b25781f10a1935e53f8d3800277b2d6f7cebc7ade9b8ef9ed6582c44,PodSandboxId:65b713e60d6e6ace420aa097c21c09e236417669978aa500826c9f51e1129455,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765970167824007229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32b2c11-7a98-48ba-89d5-3a5e581c171b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02277316fac522833acac691e0e0cd2fe5d863294b5a9a6c9d4ce03fbcfd48f8,PodSandboxId:33825f2fd286da9301fc3ba0fcc90cbd1238b56b148a5f9e3256e0dbe31b2547,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765970161735737258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-f9dfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c65235-f139-4f33-adef-fc6ef1ccb253,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb396bc0e216641073e6dc503e2a99bad41acfc829a1f131a5d1f0fc16e232e0,PodSandboxId:40ea1b3a1351db4bb464ffe1eb4ebcab02d218ef66454fb944ac5e8fc0d98ae6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765970160807960000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6pdv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d1e053-5420-4db6-a1f6-daab3034e85c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c48b14c60d6a45580132868fce1558152914faf87b5d0e6df6f66364e511801,PodSandboxId:1ab816366f67f435cc5cc75420205135cbead4b6941122b61878c2debaca3b89,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765970149189982033,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10e8ce73c3cc79b63688be36508c3f66,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de5475dd01a03ce3358f3c656b415b83b074580e46c6ad1a130279cec74872ab,PodSandboxId:daa48fe629c484b123b35d228ebb38cf0bae01d253e2de4d8580ac6bc280920b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765970149213819944,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cfbacbbf633ed0be3d9c6bc9784a200,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c14e25e96ab62a91146b02184edba5d238f99effa3b33a0a7fdeac0d6813524,PodSandboxId:f245ba360082c2736323673a75934b41e50a424d197fbda93c00c99e5ae0e67e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765970149170240476,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08453d2f2433d0eaf792f
305f65cd5f7,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56b63b5b38c52a74a999a91f90bca01f6ee7238bada280b7886f9a5ab521452,PodSandboxId:66b4583fa8412d7d376eb513c5676ec785a5817b40b8d53871ef9d12bbe6a8c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765970149143363619,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 746f2826e5ab144162efd3359f041e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e97439e2-14e8-4422-bd79-6064a4097188 name=/runtime.v1.RuntimeService/ListContainers
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.993267981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bfb848fc-e92c-44c6-b9d0-154674da823d name=/runtime.v1.RuntimeService/Version
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.993368064Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bfb848fc-e92c-44c6-b9d0-154674da823d name=/runtime.v1.RuntimeService/Version
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.994701129Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70aaf2f8-d5b6-44b5-802d-352e2a12c445 name=/runtime.v1.ImageService/ImageFsInfo
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.995871740Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765970421995848634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551113,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70aaf2f8-d5b6-44b5-802d-352e2a12c445 name=/runtime.v1.ImageService/ImageFsInfo
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.996739659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9be81646-2396-4828-8f43-7788c88199be name=/runtime.v1.RuntimeService/ListContainers
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.996806392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9be81646-2396-4828-8f43-7788c88199be name=/runtime.v1.RuntimeService/ListContainers
Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.997083935Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3388f744eafd79eead0b5cf45f82b2bb84d2d06d6d7e4a006bb805f6ece193af,PodSandboxId:e6bd9f288ebf4cc56da58c245cc18922df0dd7178151de1119656ef662963808,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765970279763976133,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8c813f3-2dd2-444d-88d8-fe297f907413,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a3d3f9b57387f33a36c34182b887ebe1682722b4962431559e27be67059c84,PodSandboxId:89babac933f5aa295026f142bf82dbc55a8133f3f75c64ecfc188492117a4d4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765970251053857675,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 89b289cf-cd57-4583-9745-2ff3ad4a62ac,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3bd861b00947ea747cba080e36feb58d620667bd535a3e091dbcc8119f2f8d,PodSandboxId:a5ad2d4e669ae7633426a272a372a1ba1aeb99b745f797d026ce3ca3157ed186,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765970239840773449,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-wnptk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: efa47144-c3a7-4842-b47a-dccdfad29fa0,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ce9723c594e95d3af7ff756ea18c604a5a5c85238726ed7f208eb8ca1fe9521a,PodSandboxId:ab1d3f4303d9ff06239d666f537ce2ad700c6da2c32df914e264ea4c0b557ce3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765970212994447146,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xcp88,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0ea1bb28-6eb2-4e5f-a0ab-ae4ac81e953d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe1320aa0577943f2ce261f25568a27312decbe658920f895186345ff229969,PodSandboxId:d156e490be750c3b9a5e337893c5b3a7e7bc1615b4f159398ba8ebdb1524e7b0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765970212875683522,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nfwbf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d3c5178c-0e1c-404b-a454-cd0502cb0ba6,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e71f626582a418e0ed5c8719bfd643ff314b7611415afe559dcc3f7323bb80b,PodSandboxId:90ece57d709f4ccc566a2741d429cf0bae90a9c669f1b48a5cb1fb087ae69778,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765970196708651331,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6073097f-5ea5-4564-9be4-35f9191742dc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6935f264be567197d91ea3539a21b4ad960e18fce8f0cd2fd8a064aee0962b,PodSandboxId:0ecb8b919ad23721d145afcc276bf240c33e08867b8b11afbf7cc21919836c35,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765970177766563755,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7vz7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f6f486-f31c-465a-bbac-0cabfeabfa57,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bae8cb5b25781f10a1935e53f8d3800277b2d6f7cebc7ade9b8ef9ed6582c44,PodSandboxId:65b713e60d6e6ace420aa097c21c09e236417669978aa500826c9f51e1129455,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765970167824007229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32b2c11-7a98-48ba-89d5-3a5e581c171b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02277316fac522833acac691e0e0cd2fe5d863294b5a9a6c9d4ce03fbcfd48f8,PodSandboxId:33825f2fd286da9301fc3ba0fcc90cbd1238b56b148a5f9e3256e0dbe31b2547,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765970161735737258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-f9dfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c65235-f139-4f33-adef-fc6ef1ccb253,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb396bc0e216641073e6dc503e2a99bad41acfc829a1f131a5d1f0fc16e232e0,PodSandboxId:40ea1b3a1351db4bb464ffe1eb4ebcab02d218ef66454fb944ac5e8fc0d98ae6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765970160807960000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6pdv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d1e053-5420-4db6-a1f6-daab3034e85c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c48b14c60d6a45580132868fce1558152914faf87b5d0e6df6f66364e511801,PodSandboxId:1ab816366f67f435cc5cc75420205135cbead4b6941122b61878c2debaca3b89,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765970149189982033,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10e8ce73c3cc79b63688be36508c3f66,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de5475dd01a03ce3358f3c656b415b83b074580e46c6ad1a130279cec74872ab,PodSandboxId:daa48fe629c484b123b35d228ebb38cf0bae01d253e2de4d8580ac6bc280920b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765970149213819944,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cfbacbbf633ed0be3d9c6bc9784a200,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c14e25e96ab62a91146b02184edba5d238f99effa3b33a0a7fdeac0d6813524,PodSandboxId:f245ba360082c2736323673a75934b41e50a424d197fbda93c00c99e5ae0e67e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765970149170240476,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08453d2f2433d0eaf792f
305f65cd5f7,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56b63b5b38c52a74a999a91f90bca01f6ee7238bada280b7886f9a5ab521452,PodSandboxId:66b4583fa8412d7d376eb513c5676ec785a5817b40b8d53871ef9d12bbe6a8c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765970149143363619,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 746f2826e5ab144162efd3359f041e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9be81646-2396-4828-8f43-7788c88199be name=/runtime.v1.RuntimeService/ListContainers
Dec 17 11:20:22 addons-410268 crio[813]: time="2025-12-17 11:20:22.018608345Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
3388f744eafd7 public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff 2 minutes ago Running nginx 0 e6bd9f288ebf4 nginx default
e5a3d3f9b5738 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 2 minutes ago Running busybox 0 89babac933f5a busybox default
2b3bd861b0094 registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad 3 minutes ago Running controller 0 a5ad2d4e669ae ingress-nginx-controller-85d4c799dd-wnptk ingress-nginx
ce9723c594e95 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285 3 minutes ago Exited patch 0 ab1d3f4303d9f ingress-nginx-admission-patch-xcp88 ingress-nginx
1fe1320aa0577 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285 3 minutes ago Exited create 0 d156e490be750 ingress-nginx-admission-create-nfwbf ingress-nginx
9e71f626582a4 docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 3 minutes ago Running minikube-ingress-dns 0 90ece57d709f4 kube-ingress-dns-minikube kube-system
7c6935f264be5 docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 0ecb8b919ad23 amd-gpu-device-plugin-7vz7s kube-system
2bae8cb5b2578 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 65b713e60d6e6 storage-provisioner kube-system
02277316fac52 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 33825f2fd286d coredns-66bc5c9577-f9dfv kube-system
cb396bc0e2166 36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691 4 minutes ago Running kube-proxy 0 40ea1b3a1351d kube-proxy-6pdv6 kube-system
de5475dd01a03 aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78 4 minutes ago Running kube-scheduler 0 daa48fe629c48 kube-scheduler-addons-410268 kube-system
9c48b14c60d6a a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1 4 minutes ago Running etcd 0 1ab816366f67f etcd-addons-410268 kube-system
2c14e25e96ab6 5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942 4 minutes ago Running kube-controller-manager 0 f245ba360082c kube-controller-manager-addons-410268 kube-system
e56b63b5b38c5 aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c 4 minutes ago Running kube-apiserver 0 66b4583fa8412 kube-apiserver-addons-410268 kube-system
==> coredns [02277316fac522833acac691e0e0cd2fe5d863294b5a9a6c9d4ce03fbcfd48f8] <==
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
[INFO] Reloading complete
[INFO] 10.244.0.23:42695 - 32588 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000452699s
[INFO] 10.244.0.23:36435 - 64179 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00013891s
[INFO] 10.244.0.23:56108 - 42822 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000219564s
[INFO] 10.244.0.23:57650 - 38552 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000084398s
[INFO] 10.244.0.23:49609 - 4240 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00016395s
[INFO] 10.244.0.23:35563 - 18123 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000334952s
[INFO] 10.244.0.23:60193 - 19299 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001622887s
[INFO] 10.244.0.23:48643 - 25052 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003869549s
[INFO] 10.244.0.27:49832 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000238957s
[INFO] 10.244.0.27:37521 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000157504s
==> describe nodes <==
Name: addons-410268
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-410268
kubernetes.io/os=linux
minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
minikube.k8s.io/name=addons-410268
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_17T11_15_55_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-410268
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 17 Dec 2025 11:15:52 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-410268
AcquireTime: <unset>
RenewTime: Wed, 17 Dec 2025 11:20:19 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 17 Dec 2025 11:18:27 +0000 Wed, 17 Dec 2025 11:15:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 17 Dec 2025 11:18:27 +0000 Wed, 17 Dec 2025 11:15:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 17 Dec 2025 11:18:27 +0000 Wed, 17 Dec 2025 11:15:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 17 Dec 2025 11:18:27 +0000 Wed, 17 Dec 2025 11:15:55 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.28
Hostname: addons-410268
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001796Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001796Ki
pods: 110
System Info:
Machine ID: 7773aa7269d04e148c7e331a57e11558
System UUID: 7773aa72-69d0-4e14-8c7e-331a57e11558
Boot ID: 3b845b4b-5fae-44f0-b3f6-c52161226314
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.3
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m55s
default hello-world-app-5d498dc89-btq58 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m28s
ingress-nginx ingress-nginx-controller-85d4c799dd-wnptk 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m15s
kube-system amd-gpu-device-plugin-7vz7s 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m19s
kube-system coredns-66bc5c9577-f9dfv 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m22s
kube-system etcd-addons-410268 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m28s
kube-system kube-apiserver-addons-410268 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m28s
kube-system kube-controller-manager-addons-410268 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m28s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m16s
kube-system kube-proxy-6pdv6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m22s
kube-system kube-scheduler-addons-410268 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m28s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m16s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m20s kube-proxy
Normal NodeHasSufficientMemory 4m34s (x8 over 4m34s) kubelet Node addons-410268 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m34s (x8 over 4m34s) kubelet Node addons-410268 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m34s (x7 over 4m34s) kubelet Node addons-410268 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m34s kubelet Updated Node Allocatable limit across pods
Normal Starting 4m28s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m28s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m28s kubelet Node addons-410268 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m28s kubelet Node addons-410268 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m28s kubelet Node addons-410268 status is now: NodeHasSufficientPID
Normal NodeReady 4m27s kubelet Node addons-410268 status is now: NodeReady
Normal RegisteredNode 4m23s node-controller Node addons-410268 event: Registered Node addons-410268 in Controller
==> dmesg <==
[ +0.048223] kauditd_printk_skb: 405 callbacks suppressed
[ +2.961015] kauditd_printk_skb: 293 callbacks suppressed
[ +6.055211] kauditd_printk_skb: 5 callbacks suppressed
[ +12.879282] kauditd_printk_skb: 32 callbacks suppressed
[ +8.872741] kauditd_printk_skb: 26 callbacks suppressed
[ +5.153815] kauditd_printk_skb: 107 callbacks suppressed
[ +1.010193] kauditd_printk_skb: 73 callbacks suppressed
[Dec17 11:17] kauditd_printk_skb: 49 callbacks suppressed
[ +5.470482] kauditd_printk_skb: 65 callbacks suppressed
[ +0.000054] kauditd_printk_skb: 96 callbacks suppressed
[ +1.750391] kauditd_printk_skb: 65 callbacks suppressed
[ +7.105559] kauditd_printk_skb: 32 callbacks suppressed
[ +3.424473] kauditd_printk_skb: 47 callbacks suppressed
[ +10.571002] kauditd_printk_skb: 17 callbacks suppressed
[ +5.861369] kauditd_printk_skb: 22 callbacks suppressed
[ +4.636061] kauditd_printk_skb: 38 callbacks suppressed
[ +1.671797] kauditd_printk_skb: 141 callbacks suppressed
[Dec17 11:18] kauditd_printk_skb: 77 callbacks suppressed
[ +1.608255] kauditd_printk_skb: 167 callbacks suppressed
[ +2.906072] kauditd_printk_skb: 78 callbacks suppressed
[ +2.311565] kauditd_printk_skb: 128 callbacks suppressed
[ +0.000026] kauditd_printk_skb: 10 callbacks suppressed
[ +6.854772] kauditd_printk_skb: 41 callbacks suppressed
[ +3.449303] kauditd_printk_skb: 127 callbacks suppressed
[Dec17 11:20] kauditd_printk_skb: 10 callbacks suppressed
==> etcd [9c48b14c60d6a45580132868fce1558152914faf87b5d0e6df6f66364e511801] <==
{"level":"warn","ts":"2025-12-17T11:16:52.742399Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.16272ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-17T11:16:52.742417Z","caller":"traceutil/trace.go:172","msg":"trace[1694931419] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1021; }","duration":"162.182119ms","start":"2025-12-17T11:16:52.580230Z","end":"2025-12-17T11:16:52.742412Z","steps":["trace[1694931419] 'agreement among raft nodes before linearized reading' (duration: 162.156632ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-17T11:16:52.743348Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T11:16:52.366117Z","time spent":"375.795334ms","remote":"127.0.0.1:56468","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":9227,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/gadget/gadget-2nlrj\" mod_revision:1001 > success:<request_put:<key:\"/registry/pods/gadget/gadget-2nlrj\" value_size:9185 >> failure:<request_range:<key:\"/registry/pods/gadget/gadget-2nlrj\" > >"}
{"level":"info","ts":"2025-12-17T11:17:01.376521Z","caller":"traceutil/trace.go:172","msg":"trace[1767755788] transaction","detail":"{read_only:false; response_revision:1077; number_of_response:1; }","duration":"133.166443ms","start":"2025-12-17T11:17:01.243340Z","end":"2025-12-17T11:17:01.376506Z","steps":["trace[1767755788] 'process raft request' (duration: 133.062306ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-17T11:17:05.669342Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"276.507343ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-17T11:17:05.669399Z","caller":"traceutil/trace.go:172","msg":"trace[188952421] range","detail":"{range_begin:/registry/validatingadmissionpolicies; range_end:; response_count:0; response_revision:1093; }","duration":"276.574137ms","start":"2025-12-17T11:17:05.392812Z","end":"2025-12-17T11:17:05.669386Z","steps":["trace[188952421] 'range keys from in-memory index tree' (duration: 275.848761ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T11:17:15.210459Z","caller":"traceutil/trace.go:172","msg":"trace[716277579] linearizableReadLoop","detail":"{readStateIndex:1189; appliedIndex:1189; }","duration":"203.790448ms","start":"2025-12-17T11:17:15.006652Z","end":"2025-12-17T11:17:15.210443Z","steps":["trace[716277579] 'read index received' (duration: 203.785757ms)","trace[716277579] 'applied index is now lower than readState.Index' (duration: 3.91µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-17T11:17:15.210576Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"203.909151ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-17T11:17:15.210596Z","caller":"traceutil/trace.go:172","msg":"trace[1277096382] range","detail":"{range_begin:/registry/podtemplates; range_end:; response_count:0; response_revision:1161; }","duration":"203.941291ms","start":"2025-12-17T11:17:15.006648Z","end":"2025-12-17T11:17:15.210589Z","steps":["trace[1277096382] 'agreement among raft nodes before linearized reading' (duration: 203.879804ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-17T11:17:15.210930Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.62392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-17T11:17:15.210954Z","caller":"traceutil/trace.go:172","msg":"trace[1347680689] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1162; }","duration":"140.653677ms","start":"2025-12-17T11:17:15.070294Z","end":"2025-12-17T11:17:15.210947Z","steps":["trace[1347680689] 'agreement among raft nodes before linearized reading' (duration: 140.607399ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T11:17:15.211212Z","caller":"traceutil/trace.go:172","msg":"trace[902206238] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"206.399719ms","start":"2025-12-17T11:17:15.004801Z","end":"2025-12-17T11:17:15.211201Z","steps":["trace[902206238] 'process raft request' (duration: 205.985998ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T11:17:18.245932Z","caller":"traceutil/trace.go:172","msg":"trace[1930351079] linearizableReadLoop","detail":"{readStateIndex:1197; appliedIndex:1197; }","duration":"176.245604ms","start":"2025-12-17T11:17:18.069671Z","end":"2025-12-17T11:17:18.245916Z","steps":["trace[1930351079] 'read index received' (duration: 176.241554ms)","trace[1930351079] 'applied index is now lower than readState.Index' (duration: 3.353µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-17T11:17:18.246048Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"176.362128ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-17T11:17:18.246065Z","caller":"traceutil/trace.go:172","msg":"trace[528151282] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1169; }","duration":"176.393264ms","start":"2025-12-17T11:17:18.069667Z","end":"2025-12-17T11:17:18.246061Z","steps":["trace[528151282] 'agreement among raft nodes before linearized reading' (duration: 176.330722ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T11:17:18.246779Z","caller":"traceutil/trace.go:172","msg":"trace[421943312] transaction","detail":"{read_only:false; response_revision:1170; number_of_response:1; }","duration":"201.392049ms","start":"2025-12-17T11:17:18.045347Z","end":"2025-12-17T11:17:18.246739Z","steps":["trace[421943312] 'process raft request' (duration: 200.867739ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T11:17:52.594334Z","caller":"traceutil/trace.go:172","msg":"trace[1020797586] linearizableReadLoop","detail":"{readStateIndex:1396; appliedIndex:1396; }","duration":"122.428543ms","start":"2025-12-17T11:17:52.471884Z","end":"2025-12-17T11:17:52.594312Z","steps":["trace[1020797586] 'read index received' (duration: 122.420607ms)","trace[1020797586] 'applied index is now lower than readState.Index' (duration: 7µs)"],"step_count":2}
{"level":"info","ts":"2025-12-17T11:17:52.594454Z","caller":"traceutil/trace.go:172","msg":"trace[1302503521] transaction","detail":"{read_only:false; response_revision:1361; number_of_response:1; }","duration":"149.808704ms","start":"2025-12-17T11:17:52.444635Z","end":"2025-12-17T11:17:52.594443Z","steps":["trace[1302503521] 'process raft request' (duration: 149.70749ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-17T11:17:52.594493Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.591887ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-17T11:17:52.594517Z","caller":"traceutil/trace.go:172","msg":"trace[504402566] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1361; }","duration":"122.630694ms","start":"2025-12-17T11:17:52.471880Z","end":"2025-12-17T11:17:52.594511Z","steps":["trace[504402566] 'agreement among raft nodes before linearized reading' (duration: 122.560785ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T11:17:53.822275Z","caller":"traceutil/trace.go:172","msg":"trace[1783343896] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1386; }","duration":"231.03912ms","start":"2025-12-17T11:17:53.591221Z","end":"2025-12-17T11:17:53.822260Z","steps":["trace[1783343896] 'process raft request' (duration: 230.860709ms)"],"step_count":1}
{"level":"info","ts":"2025-12-17T11:17:58.897335Z","caller":"traceutil/trace.go:172","msg":"trace[1654990130] linearizableReadLoop","detail":"{readStateIndex:1471; appliedIndex:1471; }","duration":"191.648246ms","start":"2025-12-17T11:17:58.705668Z","end":"2025-12-17T11:17:58.897316Z","steps":["trace[1654990130] 'read index received' (duration: 191.639817ms)","trace[1654990130] 'applied index is now lower than readState.Index' (duration: 7.277µs)"],"step_count":2}
{"level":"info","ts":"2025-12-17T11:17:58.897711Z","caller":"traceutil/trace.go:172","msg":"trace[1494870268] transaction","detail":"{read_only:false; response_revision:1434; number_of_response:1; }","duration":"274.432867ms","start":"2025-12-17T11:17:58.623267Z","end":"2025-12-17T11:17:58.897700Z","steps":["trace[1494870268] 'process raft request' (duration: 274.324035ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-17T11:17:58.897700Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"192.024025ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" limit:1 ","response":"range_response_count:1 size:573"}
{"level":"info","ts":"2025-12-17T11:17:58.897765Z","caller":"traceutil/trace.go:172","msg":"trace[1986685695] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:1; response_revision:1433; }","duration":"192.105457ms","start":"2025-12-17T11:17:58.705648Z","end":"2025-12-17T11:17:58.897754Z","steps":["trace[1986685695] 'agreement among raft nodes before linearized reading' (duration: 191.747861ms)"],"step_count":1}
==> kernel <==
11:20:22 up 4 min, 0 users, load average: 0.50, 1.04, 0.53
Linux addons-410268 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [e56b63b5b38c52a74a999a91f90bca01f6ee7238bada280b7886f9a5ab521452] <==
E1217 11:17:00.154665 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.205.218:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.205.218:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.205.218:443: connect: connection refused" logger="UnhandledError"
E1217 11:17:00.175589 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.205.218:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.205.218:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.205.218:443: connect: connection refused" logger="UnhandledError"
E1217 11:17:00.217107 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.205.218:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.205.218:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.205.218:443: connect: connection refused" logger="UnhandledError"
E1217 11:17:00.298835 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.205.218:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.205.218:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.205.218:443: connect: connection refused" logger="UnhandledError"
I1217 11:17:00.528021 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1217 11:17:38.500147 1 conn.go:339] Error on socket receive: read tcp 192.168.39.28:8443->192.168.39.1:50194: use of closed network connection
E1217 11:17:38.687963 1 conn.go:339] Error on socket receive: read tcp 192.168.39.28:8443->192.168.39.1:50232: use of closed network connection
I1217 11:17:47.757622 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.136.141"}
I1217 11:17:54.180834 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1217 11:17:54.389276 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.53.216"}
I1217 11:18:01.192462 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1217 11:18:16.503066 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
E1217 11:18:35.304069 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I1217 11:18:44.882023 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1217 11:18:44.885421 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1217 11:18:44.919637 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1217 11:18:44.919686 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1217 11:18:44.953754 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1217 11:18:44.953808 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1217 11:18:45.007788 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1217 11:18:45.007838 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1217 11:18:45.919783 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1217 11:18:46.008455 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1217 11:18:46.022256 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I1217 11:20:20.969982 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.106.1"}
==> kube-controller-manager [2c14e25e96ab62a91146b02184edba5d238f99effa3b33a0a7fdeac0d6813524] <==
E1217 11:18:55.047873 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
I1217 11:18:59.208223 1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
I1217 11:18:59.208270 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1217 11:18:59.286980 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1217 11:18:59.287039 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1217 11:18:59.790669 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 11:18:59.791736 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 11:19:01.395546 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 11:19:01.396707 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 11:19:02.677324 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 11:19:02.678341 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 11:19:13.072480 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 11:19:13.073537 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 11:19:14.990560 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 11:19:14.991525 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 11:19:20.410474 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 11:19:20.411409 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 11:19:44.019718 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 11:19:44.021136 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 11:19:46.025818 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 11:19:46.026859 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 11:19:47.235594 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 11:19:47.236568 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1217 11:20:16.629032 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1217 11:20:16.629949 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
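[editor's note] The repeated reflector errors above originate in client-go's metadata informer layer (note the reflector= path, k8s.io/client-go/metadata/metadatainformer/informer.go): the streaming watchlist request is refused by the API server, so the reflector falls back to plain LIST/WATCH, exactly as the message says, and every list moves only *v1.PartialObjectMetadata. A minimal sketch of such a metadata-only informer for orientation; the kubeconfig path and the configmaps resource are illustrative choices, not what the controller-manager actually watches here:

package main

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/metadata/metadatainformer"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig location (~/.kube/config); adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// A metadata client lists/watches *v1.PartialObjectMetadata rather than
	// full objects -- the exact type named in the errors above.
	mc, err := metadata.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	factory := metadatainformer.NewSharedInformerFactory(mc, 10*time.Minute)
	gvr := schema.GroupVersionResource{Version: "v1", Resource: "configmaps"}
	inf := factory.ForResource(gvr).Informer()
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if m, ok := obj.(*metav1.PartialObjectMetadata); ok {
				_ = m.Name // metadata-only view of the object
			}
		},
	})
	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	<-stop
}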
==> kube-proxy [cb396bc0e216641073e6dc503e2a99bad41acfc829a1f131a5d1f0fc16e232e0] <==
I1217 11:16:01.320650 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1217 11:16:01.423767 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1217 11:16:01.423896 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.28"]
E1217 11:16:01.424229 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1217 11:16:01.651698 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1217 11:16:01.651749 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1217 11:16:01.651777 1 server_linux.go:132] "Using iptables Proxier"
I1217 11:16:01.687578 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1217 11:16:01.688388 1 server.go:527] "Version info" version="v1.34.3"
I1217 11:16:01.688402 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1217 11:16:01.693470 1 config.go:200] "Starting service config controller"
I1217 11:16:01.693482 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1217 11:16:01.698899 1 config.go:106] "Starting endpoint slice config controller"
I1217 11:16:01.698918 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1217 11:16:01.699635 1 config.go:403] "Starting serviceCIDR config controller"
I1217 11:16:01.699644 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1217 11:16:01.700775 1 config.go:309] "Starting node config controller"
I1217 11:16:01.700786 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1217 11:16:01.700792 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1217 11:16:01.794076 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1217 11:16:01.801337 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1217 11:16:01.801462 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
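[editor's note] Both kube-proxy notices above concern NodePort reachability rather than failures: with nodePortAddresses unset, NodePorts accept connections on every local IP, and route_localnet=1 additionally exposes them on 127.0.0.1. Below is a hedged sketch of the narrowing the log itself suggests (`--nodeport-addresses primary`), rendered as a KubeProxyConfiguration; it assumes the k8s.io/kube-proxy/config/v1alpha1 and sigs.k8s.io/yaml modules and is not part of this test suite:

package main

import (
	"fmt"

	kubeproxyv1alpha1 "k8s.io/kube-proxy/config/v1alpha1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeproxyv1alpha1.KubeProxyConfiguration{
		// "primary" restricts NodePort listeners to the node's primary IPs,
		// which silences the "may be incomplete or incorrect" warning above.
		NodePortAddresses: []string{"primary"},
	}
	cfg.APIVersion = "kubeproxy.config.k8s.io/v1alpha1"
	cfg.Kind = "KubeProxyConfiguration"

	out, err := yaml.Marshal(&cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // a config file consumable via kube-proxy --config
}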
==> kube-scheduler [de5475dd01a03ce3358f3c656b415b83b074580e46c6ad1a130279cec74872ab] <==
E1217 11:15:52.141286 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1217 11:15:52.140815 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1217 11:15:52.141108 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1217 11:15:52.141473 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1217 11:15:52.141537 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1217 11:15:52.140508 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1217 11:15:52.943654 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1217 11:15:52.957604 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1217 11:15:53.007599 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1217 11:15:53.024263 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1217 11:15:53.066280 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1217 11:15:53.130106 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1217 11:15:53.156953 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1217 11:15:53.157824 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1217 11:15:53.176905 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1217 11:15:53.213603 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1217 11:15:53.216208 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1217 11:15:53.230760 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1217 11:15:53.231015 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1217 11:15:53.275220 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1217 11:15:53.338533 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1217 11:15:53.492656 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1217 11:15:53.585005 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1217 11:15:53.641273 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
I1217 11:15:56.215290 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
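[editor's note] The burst of "Failed to watch ... is forbidden" errors above is the scheduler coming up at 11:15:52-53, before the API server has finished bootstrapping RBAC for system:kube-scheduler; the "Caches are synced" line at 11:15:56 shows the informers recovering on retry. One way to probe the same permission from outside, as an illustrative sketch (not something the test performs), is a SubjectAccessReview:

package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the API server whether kube-scheduler may list pods cluster-wide,
	// i.e. whether the RBAC bootstrap that silenced the errors above is done.
	sar := &authorizationv1.SubjectAccessReview{
		Spec: authorizationv1.SubjectAccessReviewSpec{
			User: "system:kube-scheduler",
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "list",
				Resource: "pods",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
		context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", resp.Status.Allowed)
}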
==> kubelet <==
Dec 17 11:18:54 addons-410268 kubelet[1508]: E1217 11:18:54.973365 1508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765970334972725186 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 11:18:54 addons-410268 kubelet[1508]: E1217 11:18:54.973416 1508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765970334972725186 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 11:18:55 addons-410268 kubelet[1508]: I1217 11:18:55.929150 1508 scope.go:117] "RemoveContainer" containerID="59110e977b9687b2a3d792445201c8d82a14bb0a58e61b29a5fe8c7ff8eebccc"
Dec 17 11:18:56 addons-410268 kubelet[1508]: I1217 11:18:56.043881 1508 scope.go:117] "RemoveContainer" containerID="4fb9f07ee8c9761b02b16fc9f8e32457819829a5b1c4d3f927a859516dff11a6"
Dec 17 11:18:56 addons-410268 kubelet[1508]: I1217 11:18:56.160529 1508 scope.go:117] "RemoveContainer" containerID="b0a987b71e3f8b728e69412e0813b14b1aed68373816135e5ce5e62cd003576d"
Dec 17 11:18:59 addons-410268 kubelet[1508]: I1217 11:18:59.857597 1508 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Dec 17 11:19:04 addons-410268 kubelet[1508]: E1217 11:19:04.976896 1508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765970344976136753 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 11:19:04 addons-410268 kubelet[1508]: E1217 11:19:04.977021 1508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765970344976136753 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 11:19:09 addons-410268 kubelet[1508]: I1217 11:19:09.857136 1508 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-7vz7s" secret="" err="secret \"gcp-auth\" not found"
Dec 17 11:19:14 addons-410268 kubelet[1508]: E1217 11:19:14.980370 1508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765970354979499983 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 11:19:14 addons-410268 kubelet[1508]: E1217 11:19:14.980408 1508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765970354979499983 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 11:19:24 addons-410268 kubelet[1508]: E1217 11:19:24.983057 1508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765970364982711518 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 11:19:24 addons-410268 kubelet[1508]: E1217 11:19:24.983356 1508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765970364982711518 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 11:19:34 addons-410268 kubelet[1508]: E1217 11:19:34.989462 1508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765970374988840559 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 11:19:34 addons-410268 kubelet[1508]: E1217 11:19:34.989512 1508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765970374988840559 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 11:19:44 addons-410268 kubelet[1508]: E1217 11:19:44.991921 1508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765970384991330257 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 11:19:44 addons-410268 kubelet[1508]: E1217 11:19:44.991961 1508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765970384991330257 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 11:19:54 addons-410268 kubelet[1508]: E1217 11:19:54.995311 1508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765970394994869269 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 11:19:54 addons-410268 kubelet[1508]: E1217 11:19:54.995349 1508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765970394994869269 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 11:20:04 addons-410268 kubelet[1508]: E1217 11:20:04.998061 1508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765970404997700538 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 11:20:04 addons-410268 kubelet[1508]: E1217 11:20:04.998083 1508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765970404997700538 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 11:20:15 addons-410268 kubelet[1508]: E1217 11:20:15.000438 1508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765970414999795069 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 11:20:15 addons-410268 kubelet[1508]: E1217 11:20:15.000460 1508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765970414999795069 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:551113} inodes_used:{value:196}}"
Dec 17 11:20:18 addons-410268 kubelet[1508]: I1217 11:20:18.857508 1508 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-7vz7s" secret="" err="secret \"gcp-auth\" not found"
Dec 17 11:20:21 addons-410268 kubelet[1508]: I1217 11:20:21.034399 1508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljvn7\" (UniqueName: \"kubernetes.io/projected/d97fa524-2409-4372-8cad-5cf2b4b55c48-kube-api-access-ljvn7\") pod \"hello-world-app-5d498dc89-btq58\" (UID: \"d97fa524-2409-4372-8cad-5cf2b4b55c48\") " pod="default/hello-world-app-5d498dc89-btq58"
==> storage-provisioner [2bae8cb5b25781f10a1935e53f8d3800277b2d6f7cebc7ade9b8ef9ed6582c44] <==
W1217 11:19:57.511194 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:19:59.514892 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:19:59.520892 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:01.523934 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:01.528870 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:03.532001 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:03.540481 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:05.544034 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:05.548614 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:07.551945 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:07.558795 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:09.561722 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:09.566790 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:11.570534 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:11.577603 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:13.582814 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:13.589249 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:15.591829 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:15.598700 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:17.602415 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:17.608139 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:19.611763 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:19.616723 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:21.620739 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 11:20:21.627680 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
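[editor's note] The storage-provisioner warnings at the end of the dump arrive in pairs every two seconds, a cadence consistent with a leader-election loop still renewing an Endpoints-based lock (an inference from the timing, not something the log states). The server-side deprecation notice names the replacement API, discovery.k8s.io/v1 EndpointSlice, which reads like this from client-go (kubeconfig path and namespace are illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// discovery.k8s.io/v1 EndpointSlice is the replacement the warning names.
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
}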
helpers_test.go:263: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-410268 -n addons-410268
helpers_test.go:270: (dbg) Run: kubectl --context addons-410268 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-btq58 ingress-nginx-admission-create-nfwbf ingress-nginx-admission-patch-xcp88
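[editor's note] The non-running-pod list above is produced by the field selector status.phase!=Running shown in the kubectl invocation. For reference, the same query via client-go rather than kubectl, with the kubeconfig path as an illustrative assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Namespace + "/" + p.Name)
	}
}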
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run: kubectl --context addons-410268 describe pod hello-world-app-5d498dc89-btq58 ingress-nginx-admission-create-nfwbf ingress-nginx-admission-patch-xcp88
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-410268 describe pod hello-world-app-5d498dc89-btq58 ingress-nginx-admission-create-nfwbf ingress-nginx-admission-patch-xcp88: exit status 1 (73.213757ms)
-- stdout --
Name:             hello-world-app-5d498dc89-btq58
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-410268/192.168.39.28
Start Time:       Wed, 17 Dec 2025 11:20:20 +0000
Labels:           app=hello-world-app
                  pod-template-hash=5d498dc89
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/hello-world-app-5d498dc89
Containers:
  hello-world-app:
    Container ID:
    Image:          docker.io/kicbase/echo-server:1.0
    Image ID:
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ljvn7 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-ljvn7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-btq58 to addons-410268
  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-nfwbf" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-xcp88" not found
** /stderr **
helpers_test.go:288: kubectl --context addons-410268 describe pod hello-world-app-5d498dc89-btq58 ingress-nginx-admission-create-nfwbf ingress-nginx-admission-patch-xcp88: exit status 1
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-410268 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-410268 addons disable ingress-dns --alsologtostderr -v=1: (1.051663954s)
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-410268 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-410268 addons disable ingress --alsologtostderr -v=1: (7.732583738s)
--- FAIL: TestAddons/parallel/Ingress (157.90s)