=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run: kubectl --context addons-917695 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run: kubectl --context addons-917695 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run: kubectl --context addons-917695 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [01c1c75f-6820-4ed0-adec-927c0fe8b534] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [01c1c75f-6820-4ed0-adec-927c0fe8b534] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.003795786s
I1213 08:32:50.965234 9697 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run: out/minikube-linux-amd64 -p addons-917695 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-917695 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.804116547s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run: kubectl --context addons-917695 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run: out/minikube-linux-amd64 -p addons-917695 ip
addons_test.go:301: (dbg) Run: nslookup hello-john.test 192.168.39.154
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-917695 -n addons-917695
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-amd64 -p addons-917695 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-917695 logs -n 25: (1.231207741s)
helpers_test.go:261: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-433374 │ download-only-433374 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
│ start │ --download-only -p binary-mirror-067349 --alsologtostderr --binary-mirror http://127.0.0.1:40107 --driver=kvm2 --container-runtime=crio │ binary-mirror-067349 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ │
│ delete │ -p binary-mirror-067349 │ binary-mirror-067349 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
│ addons │ disable dashboard -p addons-917695 │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ │
│ addons │ enable dashboard -p addons-917695 │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ │
│ start │ -p addons-917695 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:31 UTC │
│ addons │ addons-917695 addons disable volcano --alsologtostderr -v=1 │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │ 13 Dec 25 08:31 UTC │
│ addons │ addons-917695 addons disable gcp-auth --alsologtostderr -v=1 │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
│ addons │ enable headlamp -p addons-917695 --alsologtostderr -v=1 │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
│ addons │ addons-917695 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
│ ssh │ addons-917695 ssh cat /opt/local-path-provisioner/pvc-e8937d4d-4320-4d8c-b491-c79dee89d1bb_default_test-pvc/file1 │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
│ addons │ addons-917695 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:33 UTC │
│ ip │ addons-917695 ip │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
│ addons │ addons-917695 addons disable registry --alsologtostderr -v=1 │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
│ addons │ addons-917695 addons disable headlamp --alsologtostderr -v=1 │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
│ addons │ addons-917695 addons disable metrics-server --alsologtostderr -v=1 │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
│ addons │ addons-917695 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
│ ssh │ addons-917695 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-917695 │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
│ addons │ addons-917695 addons disable registry-creds --alsologtostderr -v=1 │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:32 UTC │
│ addons │ addons-917695 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:32 UTC │ 13 Dec 25 08:33 UTC │
│ addons │ addons-917695 addons disable yakd --alsologtostderr -v=1 │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:33 UTC │ 13 Dec 25 08:33 UTC │
│ addons │ addons-917695 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:33 UTC │ 13 Dec 25 08:33 UTC │
│ addons │ addons-917695 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:33 UTC │ 13 Dec 25 08:33 UTC │
│ ip │ addons-917695 ip │ addons-917695 │ jenkins │ v1.37.0 │ 13 Dec 25 08:35 UTC │ 13 Dec 25 08:35 UTC │
└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/13 08:29:43
Running on machine: ubuntu-20-agent-8
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1213 08:29:43.910619 10611 out.go:360] Setting OutFile to fd 1 ...
I1213 08:29:43.910714 10611 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:29:43.910718 10611 out.go:374] Setting ErrFile to fd 2...
I1213 08:29:43.910722 10611 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:29:43.910901 10611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5761/.minikube/bin
I1213 08:29:43.911413 10611 out.go:368] Setting JSON to false
I1213 08:29:43.912327 10611 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":728,"bootTime":1765613856,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1213 08:29:43.912382 10611 start.go:143] virtualization: kvm guest
I1213 08:29:43.914633 10611 out.go:179] * [addons-917695] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1213 08:29:43.916044 10611 notify.go:221] Checking for updates...
I1213 08:29:43.916100 10611 out.go:179] - MINIKUBE_LOCATION=22128
I1213 08:29:43.917514 10611 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1213 08:29:43.919038 10611 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22128-5761/kubeconfig
I1213 08:29:43.920410 10611 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5761/.minikube
I1213 08:29:43.921842 10611 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1213 08:29:43.923389 10611 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1213 08:29:43.925026 10611 driver.go:422] Setting default libvirt URI to qemu:///system
I1213 08:29:43.956336 10611 out.go:179] * Using the kvm2 driver based on user configuration
I1213 08:29:43.957633 10611 start.go:309] selected driver: kvm2
I1213 08:29:43.957648 10611 start.go:927] validating driver "kvm2" against <nil>
I1213 08:29:43.957663 10611 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1213 08:29:43.958400 10611 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1213 08:29:43.958638 10611 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1213 08:29:43.958664 10611 cni.go:84] Creating CNI manager for ""
I1213 08:29:43.958720 10611 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1213 08:29:43.958731 10611 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1213 08:29:43.958792 10611 start.go:353] cluster config:
{Name:addons-917695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-917695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1213 08:29:43.958908 10611 iso.go:125] acquiring lock: {Name:mk6cfae0203e3172b0791a477e21fba41da25205 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1213 08:29:43.960638 10611 out.go:179] * Starting "addons-917695" primary control-plane node in "addons-917695" cluster
I1213 08:29:43.962221 10611 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1213 08:29:43.962256 10611 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
I1213 08:29:43.962278 10611 cache.go:65] Caching tarball of preloaded images
I1213 08:29:43.962404 10611 preload.go:238] Found /home/jenkins/minikube-integration/22128-5761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1213 08:29:43.962419 10611 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
I1213 08:29:43.962760 10611 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/config.json ...
I1213 08:29:43.962789 10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/config.json: {Name:mkec48c10906261e97c7f0e36ada6310ae865811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:29:43.962936 10611 start.go:360] acquireMachinesLock for addons-917695: {Name:mk6c8e990a56a1510f4ba4283e9407bcc2a7ff5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1213 08:29:43.963000 10611 start.go:364] duration metric: took 48.605µs to acquireMachinesLock for "addons-917695"
I1213 08:29:43.963023   10611 start.go:93] Provisioning new machine with config: &{Name:addons-917695 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-917695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1213 08:29:43.963095 10611 start.go:125] createHost starting for "" (driver="kvm2")
I1213 08:29:43.964935 10611 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1213 08:29:43.965087 10611 start.go:159] libmachine.API.Create for "addons-917695" (driver="kvm2")
I1213 08:29:43.965120 10611 client.go:173] LocalClient.Create starting
I1213 08:29:43.965210 10611 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca.pem
I1213 08:29:44.104919 10611 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/cert.pem
I1213 08:29:44.172615 10611 main.go:143] libmachine: creating domain...
I1213 08:29:44.172634 10611 main.go:143] libmachine: creating network...
I1213 08:29:44.174134 10611 main.go:143] libmachine: found existing default network
I1213 08:29:44.174436 10611 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1213 08:29:44.175000 10611 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d30c60}
I1213 08:29:44.175105 10611 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-917695</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1213 08:29:44.181736 10611 main.go:143] libmachine: creating private network mk-addons-917695 192.168.39.0/24...
I1213 08:29:44.250637 10611 main.go:143] libmachine: private network mk-addons-917695 192.168.39.0/24 created
I1213 08:29:44.251007 10611 main.go:143] libmachine: <network>
<name>mk-addons-917695</name>
<uuid>3c545422-f55e-4a14-8933-1395b1844c41</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:a2:6a:d6'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1213 08:29:44.251047 10611 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695 ...
I1213 08:29:44.251075 10611 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22128-5761/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso
I1213 08:29:44.251083 10611 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22128-5761/.minikube
I1213 08:29:44.251163 10611 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22128-5761/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22128-5761/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso...
I1213 08:29:44.522355 10611 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa...
I1213 08:29:44.601651 10611 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/addons-917695.rawdisk...
I1213 08:29:44.601690 10611 main.go:143] libmachine: Writing magic tar header
I1213 08:29:44.601710 10611 main.go:143] libmachine: Writing SSH key tar header
I1213 08:29:44.601778 10611 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695 ...
I1213 08:29:44.601833 10611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695
I1213 08:29:44.601862 10611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695 (perms=drwx------)
I1213 08:29:44.601878 10611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22128-5761/.minikube/machines
I1213 08:29:44.601891 10611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22128-5761/.minikube/machines (perms=drwxr-xr-x)
I1213 08:29:44.601906 10611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22128-5761/.minikube
I1213 08:29:44.601918 10611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22128-5761/.minikube (perms=drwxr-xr-x)
I1213 08:29:44.601927 10611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22128-5761
I1213 08:29:44.601937 10611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22128-5761 (perms=drwxrwxr-x)
I1213 08:29:44.601947 10611 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1213 08:29:44.601955 10611 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1213 08:29:44.601967 10611 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1213 08:29:44.601974 10611 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1213 08:29:44.601982 10611 main.go:143] libmachine: checking permissions on dir: /home
I1213 08:29:44.601991 10611 main.go:143] libmachine: skipping /home - not owner
I1213 08:29:44.601995 10611 main.go:143] libmachine: defining domain...
I1213 08:29:44.603276 10611 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-917695</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/addons-917695.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-917695'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1213 08:29:44.611769 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:0a:f6:8b in network default
I1213 08:29:44.612436 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:29:44.612457 10611 main.go:143] libmachine: starting domain...
I1213 08:29:44.612461 10611 main.go:143] libmachine: ensuring networks are active...
I1213 08:29:44.613364 10611 main.go:143] libmachine: Ensuring network default is active
I1213 08:29:44.613802 10611 main.go:143] libmachine: Ensuring network mk-addons-917695 is active
I1213 08:29:44.614490 10611 main.go:143] libmachine: getting domain XML...
I1213 08:29:44.615624 10611 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-917695</name>
<uuid>412eefcb-63ce-429c-917f-a5530725ef67</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/addons-917695.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:4b:48:3f'/>
<source network='mk-addons-917695'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:0a:f6:8b'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1213 08:29:45.919686 10611 main.go:143] libmachine: waiting for domain to start...
I1213 08:29:45.920856 10611 main.go:143] libmachine: domain is now running
I1213 08:29:45.920875 10611 main.go:143] libmachine: waiting for IP...
I1213 08:29:45.921671 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:29:45.922208 10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
I1213 08:29:45.922228 10611 main.go:143] libmachine: trying to list again with source=arp
I1213 08:29:45.922511 10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
I1213 08:29:45.922549 10611 retry.go:31] will retry after 230.470673ms: waiting for domain to come up
I1213 08:29:46.155199 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:29:46.155775 10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
I1213 08:29:46.155794 10611 main.go:143] libmachine: trying to list again with source=arp
I1213 08:29:46.156113 10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
I1213 08:29:46.156157 10611 retry.go:31] will retry after 270.816547ms: waiting for domain to come up
I1213 08:29:46.428940 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:29:46.429556 10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
I1213 08:29:46.429575 10611 main.go:143] libmachine: trying to list again with source=arp
I1213 08:29:46.429871 10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
I1213 08:29:46.429902 10611 retry.go:31] will retry after 384.76637ms: waiting for domain to come up
I1213 08:29:46.816564 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:29:46.817247 10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
I1213 08:29:46.817270 10611 main.go:143] libmachine: trying to list again with source=arp
I1213 08:29:46.817742 10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
I1213 08:29:46.817795 10611 retry.go:31] will retry after 480.513752ms: waiting for domain to come up
I1213 08:29:47.299921 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:29:47.300523 10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
I1213 08:29:47.300545 10611 main.go:143] libmachine: trying to list again with source=arp
I1213 08:29:47.300903 10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
I1213 08:29:47.300947 10611 retry.go:31] will retry after 540.854612ms: waiting for domain to come up
I1213 08:29:47.843431 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:29:47.843952 10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
I1213 08:29:47.843966 10611 main.go:143] libmachine: trying to list again with source=arp
I1213 08:29:47.844227 10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
I1213 08:29:47.844257 10611 retry.go:31] will retry after 759.977685ms: waiting for domain to come up
I1213 08:29:48.606416 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:29:48.606965 10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
I1213 08:29:48.606983 10611 main.go:143] libmachine: trying to list again with source=arp
I1213 08:29:48.607342 10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
I1213 08:29:48.607380 10611 retry.go:31] will retry after 897.413983ms: waiting for domain to come up
I1213 08:29:49.506692 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:29:49.507407 10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
I1213 08:29:49.507433 10611 main.go:143] libmachine: trying to list again with source=arp
I1213 08:29:49.507803 10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
I1213 08:29:49.507844 10611 retry.go:31] will retry after 1.273307459s: waiting for domain to come up
I1213 08:29:50.782431 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:29:50.783022 10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
I1213 08:29:50.783038 10611 main.go:143] libmachine: trying to list again with source=arp
I1213 08:29:50.783340 10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
I1213 08:29:50.783372 10611 retry.go:31] will retry after 1.398779355s: waiting for domain to come up
I1213 08:29:52.184072 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:29:52.184617 10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
I1213 08:29:52.184631 10611 main.go:143] libmachine: trying to list again with source=arp
I1213 08:29:52.184920 10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
I1213 08:29:52.184950 10611 retry.go:31] will retry after 1.58107352s: waiting for domain to come up
I1213 08:29:53.768449 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:29:53.769119 10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
I1213 08:29:53.769139 10611 main.go:143] libmachine: trying to list again with source=arp
I1213 08:29:53.769545 10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
I1213 08:29:53.769580 10611 retry.go:31] will retry after 2.212729067s: waiting for domain to come up
I1213 08:29:55.985080 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:29:55.985767 10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
I1213 08:29:55.985787 10611 main.go:143] libmachine: trying to list again with source=arp
I1213 08:29:55.986119 10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
I1213 08:29:55.986155 10611 retry.go:31] will retry after 2.46066475s: waiting for domain to come up
I1213 08:29:58.449742 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:29:58.450279 10611 main.go:143] libmachine: no network interface addresses found for domain addons-917695 (source=lease)
I1213 08:29:58.450308 10611 main.go:143] libmachine: trying to list again with source=arp
I1213 08:29:58.450616 10611 main.go:143] libmachine: unable to find current IP address of domain addons-917695 in network mk-addons-917695 (interfaces detected: [])
I1213 08:29:58.450652 10611 retry.go:31] will retry after 3.687601265s: waiting for domain to come up
I1213 08:30:02.141825 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:02.142421 10611 main.go:143] libmachine: domain addons-917695 has current primary IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:02.142438 10611 main.go:143] libmachine: found domain IP: 192.168.39.154
I1213 08:30:02.142446 10611 main.go:143] libmachine: reserving static IP address...
I1213 08:30:02.143010 10611 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-917695", mac: "52:54:00:4b:48:3f", ip: "192.168.39.154"} in network mk-addons-917695
I1213 08:30:02.345588 10611 main.go:143] libmachine: reserved static IP address 192.168.39.154 for domain addons-917695
I1213 08:30:02.345614 10611 main.go:143] libmachine: waiting for SSH...
I1213 08:30:02.345622 10611 main.go:143] libmachine: Getting to WaitForSSH function...
I1213 08:30:02.349381 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:02.350030 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:02.350063 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:02.350305 10611 main.go:143] libmachine: Using SSH client type: native
I1213 08:30:02.350527 10611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I1213 08:30:02.350538 10611 main.go:143] libmachine: About to run SSH command:
exit 0
I1213 08:30:02.456411 10611 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1213 08:30:02.457195 10611 main.go:143] libmachine: domain creation complete
I1213 08:30:02.459185 10611 machine.go:94] provisionDockerMachine start ...
I1213 08:30:02.461724 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:02.462101 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:02.462125 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:02.462321 10611 main.go:143] libmachine: Using SSH client type: native
I1213 08:30:02.462501 10611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I1213 08:30:02.462529 10611 main.go:143] libmachine: About to run SSH command:
hostname
I1213 08:30:02.566432 10611 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1213 08:30:02.566468 10611 buildroot.go:166] provisioning hostname "addons-917695"
I1213 08:30:02.569643 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:02.570114 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:02.570138 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:02.570342 10611 main.go:143] libmachine: Using SSH client type: native
I1213 08:30:02.570577 10611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I1213 08:30:02.570590 10611 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-917695 && echo "addons-917695" | sudo tee /etc/hostname
I1213 08:30:02.692235 10611 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-917695
I1213 08:30:02.695577 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:02.696070 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:02.696096 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:02.696363 10611 main.go:143] libmachine: Using SSH client type: native
I1213 08:30:02.696597 10611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I1213 08:30:02.696616 10611 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-917695' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-917695/g' /etc/hosts;
else
echo '127.0.1.1 addons-917695' | sudo tee -a /etc/hosts;
fi
fi
I1213 08:30:02.809044 10611 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1213 08:30:02.809074 10611 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22128-5761/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-5761/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-5761/.minikube}
I1213 08:30:02.809092 10611 buildroot.go:174] setting up certificates
I1213 08:30:02.809100 10611 provision.go:84] configureAuth start
I1213 08:30:02.811840 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:02.812347 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:02.812376 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:02.814833 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:02.815381 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:02.815409 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:02.815593 10611 provision.go:143] copyHostCerts
I1213 08:30:02.815661 10611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-5761/.minikube/ca.pem (1078 bytes)
I1213 08:30:02.815822 10611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-5761/.minikube/cert.pem (1123 bytes)
I1213 08:30:02.815895 10611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-5761/.minikube/key.pem (1679 bytes)
I1213 08:30:02.815945 10611 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-5761/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca-key.pem org=jenkins.addons-917695 san=[127.0.0.1 192.168.39.154 addons-917695 localhost minikube]
I1213 08:30:02.971240 10611 provision.go:177] copyRemoteCerts
I1213 08:30:02.971317 10611 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1213 08:30:02.974396 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:02.974755 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:02.974781 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:02.974942 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:03.058526 10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1213 08:30:03.089062 10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1213 08:30:03.119237 10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1213 08:30:03.148632 10611 provision.go:87] duration metric: took 339.497846ms to configureAuth
I1213 08:30:03.148670 10611 buildroot.go:189] setting minikube options for container-runtime
I1213 08:30:03.148887 10611 config.go:182] Loaded profile config "addons-917695": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:30:03.151380 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:03.151700 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:03.151722 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:03.151912 10611 main.go:143] libmachine: Using SSH client type: native
I1213 08:30:03.152136 10611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I1213 08:30:03.152151 10611 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1213 08:30:03.435285 10611 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1213 08:30:03.435346 10611 machine.go:97] duration metric: took 976.140868ms to provisionDockerMachine
I1213 08:30:03.435362 10611 client.go:176] duration metric: took 19.470235648s to LocalClient.Create
I1213 08:30:03.435379 10611 start.go:167] duration metric: took 19.47029073s to libmachine.API.Create "addons-917695"
I1213 08:30:03.435389 10611 start.go:293] postStartSetup for "addons-917695" (driver="kvm2")
I1213 08:30:03.435398 10611 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1213 08:30:03.435468 10611 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1213 08:30:03.438742 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:03.439250 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:03.439282 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:03.439510 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:03.522537 10611 ssh_runner.go:195] Run: cat /etc/os-release
I1213 08:30:03.527857 10611 info.go:137] Remote host: Buildroot 2025.02
I1213 08:30:03.527889 10611 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5761/.minikube/addons for local assets ...
I1213 08:30:03.527973 10611 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5761/.minikube/files for local assets ...
I1213 08:30:03.527998 10611 start.go:296] duration metric: took 92.603951ms for postStartSetup
I1213 08:30:03.543261 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:03.543779 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:03.543810 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:03.544052 10611 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/config.json ...
I1213 08:30:03.565570 10611 start.go:128] duration metric: took 19.602458116s to createHost
I1213 08:30:03.568840 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:03.569304 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:03.569334 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:03.569596 10611 main.go:143] libmachine: Using SSH client type: native
I1213 08:30:03.569812 10611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I1213 08:30:03.569825 10611 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1213 08:30:03.675083 10611 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765614603.638603400
I1213 08:30:03.675110 10611 fix.go:216] guest clock: 1765614603.638603400
I1213 08:30:03.675120 10611 fix.go:229] Guest: 2025-12-13 08:30:03.6386034 +0000 UTC Remote: 2025-12-13 08:30:03.565601791 +0000 UTC m=+19.702059265 (delta=73.001609ms)
I1213 08:30:03.675140 10611 fix.go:200] guest clock delta is within tolerance: 73.001609ms
I1213 08:30:03.675146 10611 start.go:83] releasing machines lock for "addons-917695", held for 19.712134993s
I1213 08:30:03.678274 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:03.678743 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:03.678781 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:03.679388 10611 ssh_runner.go:195] Run: cat /version.json
I1213 08:30:03.679466 10611 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1213 08:30:03.682446 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:03.682895 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:03.682898 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:03.682931 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:03.683093 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:03.683410 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:03.683440 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:03.683665 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:03.761124 10611 ssh_runner.go:195] Run: systemctl --version
I1213 08:30:03.797157 10611 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1213 08:30:04.201179 10611 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1213 08:30:04.210744 10611 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1213 08:30:04.210831 10611 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1213 08:30:04.231723 10611 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1213 08:30:04.231749 10611 start.go:496] detecting cgroup driver to use...
I1213 08:30:04.231822 10611 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1213 08:30:04.252259 10611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1213 08:30:04.269142 10611 docker.go:218] disabling cri-docker service (if available) ...
I1213 08:30:04.269213 10611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1213 08:30:04.286696 10611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1213 08:30:04.303615 10611 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1213 08:30:04.451538 10611 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1213 08:30:04.668708 10611 docker.go:234] disabling docker service ...
I1213 08:30:04.668773 10611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1213 08:30:04.686445 10611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1213 08:30:04.702125 10611 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1213 08:30:04.862187 10611 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1213 08:30:05.005473 10611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
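Each competing runtime is taken out of service with the same stop / disable / mask sequence, closed by an is-active probe as verification; masking points the unit at /dev/null so neither a dependency nor socket activation can restart it. The pattern, condensed (unit names taken from the log):

    # Stop, disable, and mask Docker and cri-dockerd so CRI-O owns the node
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo "docker is down"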
I1213 08:30:05.022254 10611 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1213 08:30:05.045958 10611 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1213 08:30:05.046023 10611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1213 08:30:05.058545 10611 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1213 08:30:05.058613 10611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1213 08:30:05.071231 10611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1213 08:30:05.084958 10611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1213 08:30:05.098034 10611 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1213 08:30:05.111970 10611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1213 08:30:05.125146 10611 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1213 08:30:05.148344 10611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
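After the sed passes above, the drop-in should contain roughly the following keys. This is a reconstruction from the sed expressions, not a dump of the real file, and the section headers are assumed from CRI-O's stock config layout:

    # /etc/crio/crio.conf.d/02-crio.conf (reconstructed)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]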
I1213 08:30:05.162450 10611 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1213 08:30:05.173485 10611 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1213 08:30:05.173594 10611 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1213 08:30:05.194469 10611 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
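The failed sysctl read is expected on a fresh guest: /proc/sys/net/bridge/* only exists once br_netfilter is loaded, so the recovery is load-then-continue, followed by enabling IPv4 forwarding. The check-then-load pattern in shell:

    # Load br_netfilter if the bridge sysctls are missing, then enable forwarding
    if ! sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
      sudo modprobe br_netfilter
    fi
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"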
I1213 08:30:05.206734 10611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1213 08:30:05.349757 10611 ssh_runner.go:195] Run: sudo systemctl restart crio
I1213 08:30:05.458053 10611 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1213 08:30:05.458136 10611 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1213 08:30:05.464185 10611 start.go:564] Will wait 60s for crictl version
I1213 08:30:05.464270 10611 ssh_runner.go:195] Run: which crictl
I1213 08:30:05.468671 10611 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1213 08:30:05.514193 10611 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1213 08:30:05.514332 10611 ssh_runner.go:195] Run: crio --version
I1213 08:30:05.547990 10611 ssh_runner.go:195] Run: crio --version
I1213 08:30:05.580847 10611 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
I1213 08:30:05.585097 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:05.585519 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:05.585542 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:05.585693 10611 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1213 08:30:05.590517 10611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1213 08:30:05.606640 10611 kubeadm.go:884] updating cluster {Name:addons-917695 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-917695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1213 08:30:05.606774 10611 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1213 08:30:05.606839 10611 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 08:30:05.637201 10611 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
I1213 08:30:05.637265 10611 ssh_runner.go:195] Run: which lz4
I1213 08:30:05.642102 10611 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1213 08:30:05.646970 10611 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1213 08:30:05.647001 10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
I1213 08:30:06.862207 10611 crio.go:462] duration metric: took 1.220146055s to copy over tarball
I1213 08:30:06.862271 10611 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1213 08:30:08.323918 10611 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.461612722s)
I1213 08:30:08.323954 10611 crio.go:469] duration metric: took 1.461721609s to extract the tarball
I1213 08:30:08.323964 10611 ssh_runner.go:146] rm: /preloaded.tar.lz4
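The stat probe doubles as an existence check: exit status 1 means no tarball on the guest, so the ~340 MB preload is copied in, unpacked into /var (where CRI-O keeps its image store), and deleted. The flow condensed into one sketch; paths and filename come from the log, the scp form is illustrative and glosses over the probe running on the guest while the copy originates on the host:

    # Copy the preload only when missing, extract into /var, clean up
    if ! stat -c "%s %y" /preloaded.tar.lz4 2>/dev/null; then
      scp preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 \
        root@192.168.39.154:/preloaded.tar.lz4
    fi
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4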
I1213 08:30:08.360231 10611 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 08:30:08.397905 10611 crio.go:514] all images are preloaded for cri-o runtime.
I1213 08:30:08.397930 10611 cache_images.go:86] Images are preloaded, skipping loading
I1213 08:30:08.397937 10611 kubeadm.go:935] updating node { 192.168.39.154 8443 v1.34.2 crio true true} ...
I1213 08:30:08.398022 10611 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-917695 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.154
[Install]
config:
{KubernetesVersion:v1.34.2 ClusterName:addons-917695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
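In the kubelet drop-in above, the empty ExecStart= line is the standard systemd idiom for clearing the base unit's command before substituting a new one; without it the two ExecStart values would conflict. The effective merged command can be inspected with:

    # Show the merged unit, including the drop-in's ExecStart override
    systemctl cat kubelet | grep -A1 '^ExecStart='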
I1213 08:30:08.398107 10611 ssh_runner.go:195] Run: crio config
I1213 08:30:08.444082 10611 cni.go:84] Creating CNI manager for ""
I1213 08:30:08.444114 10611 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1213 08:30:08.444144 10611 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1213 08:30:08.444171 10611 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.154 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-917695 NodeName:addons-917695 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1213 08:30:08.444344 10611 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.154
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-917695"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.154"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.154"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
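The YAML above stitches InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration into one multi-document file, written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Assuming a kubeadm new enough to ship the subcommand (v1.26+), such a file can be sanity-checked before init:

    # Validate all documents in a combined kubeadm config file
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new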
I1213 08:30:08.444419 10611 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
I1213 08:30:08.456349 10611 binaries.go:51] Found k8s binaries, skipping transfer
I1213 08:30:08.456431 10611 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1213 08:30:08.468047 10611 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I1213 08:30:08.488645 10611 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1213 08:30:08.510140 10611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
I1213 08:30:08.531080 10611 ssh_runner.go:195] Run: grep 192.168.39.154 control-plane.minikube.internal$ /etc/hosts
I1213 08:30:08.535254 10611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.154 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
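Both host.minikube.internal and control-plane.minikube.internal are pinned with the same idempotent rewrite: drop any stale entry, append the fresh one, and sudo-copy the temp file back over /etc/hosts. Generalized, with NAME and ADDR as illustrative variables:

    # Idempotently pin NAME to ADDR in /etc/hosts
    NAME=control-plane.minikube.internal
    ADDR=192.168.39.154
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts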
I1213 08:30:08.549769 10611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1213 08:30:08.692475 10611 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1213 08:30:08.713922 10611 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695 for IP: 192.168.39.154
I1213 08:30:08.713952 10611 certs.go:195] generating shared ca certs ...
I1213 08:30:08.713972 10611 certs.go:227] acquiring lock for ca certs: {Name:mkfb64e4be02ab559f3d464592a7c41204abf76e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:30:08.714156 10611 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-5761/.minikube/ca.key
I1213 08:30:08.791705 10611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5761/.minikube/ca.crt ...
I1213 08:30:08.791740 10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/ca.crt: {Name:mkc8a5af04c5a9b6d079a5530dcd1e6a5fc22e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:30:08.791947 10611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5761/.minikube/ca.key ...
I1213 08:30:08.791963 10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/ca.key: {Name:mk614c737742b97b662e74d243aaef69b1ba86df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:30:08.792046 10611 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-5761/.minikube/proxy-client-ca.key
I1213 08:30:08.841379 10611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5761/.minikube/proxy-client-ca.crt ...
I1213 08:30:08.841408 10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/proxy-client-ca.crt: {Name:mka883a47275da5988ed8e7035e45264ecf1ce15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:30:08.841580 10611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5761/.minikube/proxy-client-ca.key ...
I1213 08:30:08.841591 10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/proxy-client-ca.key: {Name:mkb565d2ac71908ab3d6e138d8cfd0d1be094737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:30:08.841663 10611 certs.go:257] generating profile certs ...
I1213 08:30:08.841722 10611 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.key
I1213 08:30:08.841742 10611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt with IP's: []
I1213 08:30:08.919419 10611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt ...
I1213 08:30:08.919447 10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.crt: {Name:mk389650a7c35b6e97d3fe3f8f8863c24b68c72f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:30:08.919649 10611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.key ...
I1213 08:30:08.919669 10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/client.key: {Name:mk75631fde15dfff0a6240b3f8eab3a9c72961ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:30:08.919801 10611 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.key.8436d3b7
I1213 08:30:08.919823 10611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.crt.8436d3b7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.154]
I1213 08:30:08.970970 10611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.crt.8436d3b7 ...
I1213 08:30:08.970999 10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.crt.8436d3b7: {Name:mka56b0b30da6ad22dddb23e8d79f1e2bcd283ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:30:08.971179 10611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.key.8436d3b7 ...
I1213 08:30:08.971196 10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.key.8436d3b7: {Name:mk4527f83c536629075891d81bdbc0e535da620d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:30:08.971322 10611 certs.go:382] copying /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.crt.8436d3b7 -> /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.crt
I1213 08:30:08.971402 10611 certs.go:386] copying /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.key.8436d3b7 -> /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.key
I1213 08:30:08.971461 10611 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/proxy-client.key
I1213 08:30:08.971496 10611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/proxy-client.crt with IP's: []
I1213 08:30:09.020347 10611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/proxy-client.crt ...
I1213 08:30:09.020377 10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/proxy-client.crt: {Name:mkfb56d2b3c725762104423dd4a518c7879e9dd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:30:09.020593 10611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/proxy-client.key ...
I1213 08:30:09.020609 10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/proxy-client.key: {Name:mk237e900888ac8b10af5100ccf5d85988c42b40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:30:09.020821 10611 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca-key.pem (1675 bytes)
I1213 08:30:09.020859 10611 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/ca.pem (1078 bytes)
I1213 08:30:09.020883 10611 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/cert.pem (1123 bytes)
I1213 08:30:09.020907 10611 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5761/.minikube/certs/key.pem (1679 bytes)
I1213 08:30:09.021413 10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1213 08:30:09.053281 10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1213 08:30:09.082072 10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1213 08:30:09.112166 10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1213 08:30:09.142128 10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1213 08:30:09.171512 10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1213 08:30:09.200662 10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1213 08:30:09.230199 10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/profiles/addons-917695/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
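With the CA pair and the signed profile certs now on the guest, the chain can be spot-checked against the paths used above:

    # Confirm the apiserver cert is signed by the minikube CA
    openssl verify -CAfile /var/lib/minikube/certs/ca.crt /var/lib/minikube/certs/apiserver.crt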
I1213 08:30:09.259478 10611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5761/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1213 08:30:09.287984 10611 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1213 08:30:09.308129 10611 ssh_runner.go:195] Run: openssl version
I1213 08:30:09.314861 10611 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1213 08:30:09.326923 10611 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1213 08:30:09.338715 10611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1213 08:30:09.344072 10611 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:30 /usr/share/ca-certificates/minikubeCA.pem
I1213 08:30:09.344126 10611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1213 08:30:09.351482 10611 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1213 08:30:09.363385 10611 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
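The b5213941.0 link name is not arbitrary: OpenSSL looks up CAs in /etc/ssl/certs by subject hash, and the hash printed by the x509 command above is exactly the required filename stem. Deriving it by hand:

    # The trust-store symlink must be named <subject-hash>.0
    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$H.0"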
I1213 08:30:09.374721 10611 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1213 08:30:09.379453 10611 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1213 08:30:09.379545 10611 kubeadm.go:401] StartCluster: {Name:addons-917695 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-917695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1213 08:30:09.379623 10611 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1213 08:30:09.379684 10611 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1213 08:30:09.421223 10611 cri.go:89] found id: ""
I1213 08:30:09.421315 10611 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1213 08:30:09.449205 10611 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1213 08:30:09.465425 10611 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1213 08:30:09.477431 10611 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1213 08:30:09.477447 10611 kubeadm.go:158] found existing configuration files:
I1213 08:30:09.477510 10611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1213 08:30:09.488492 10611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1213 08:30:09.488561 10611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1213 08:30:09.500908 10611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1213 08:30:09.512424 10611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1213 08:30:09.512502 10611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1213 08:30:09.525170 10611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1213 08:30:09.536320 10611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1213 08:30:09.536392 10611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1213 08:30:09.548489 10611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1213 08:30:09.559795 10611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1213 08:30:09.559855 10611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1213 08:30:09.571108 10611 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1213 08:30:09.619627 10611 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
I1213 08:30:09.619703 10611 kubeadm.go:319] [preflight] Running pre-flight checks
I1213 08:30:09.720533 10611 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1213 08:30:09.720638 10611 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1213 08:30:09.720722 10611 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1213 08:30:09.731665 10611 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1213 08:30:09.734583 10611 out.go:252] - Generating certificates and keys ...
I1213 08:30:09.734681 10611 kubeadm.go:319] [certs] Using existing ca certificate authority
I1213 08:30:09.734741 10611 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1213 08:30:09.809709 10611 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1213 08:30:09.863545 10611 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1213 08:30:10.254146 10611 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1213 08:30:11.018012 10611 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1213 08:30:11.108150 10611 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1213 08:30:11.108328 10611 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-917695 localhost] and IPs [192.168.39.154 127.0.0.1 ::1]
I1213 08:30:11.357906 10611 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1213 08:30:11.358048 10611 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-917695 localhost] and IPs [192.168.39.154 127.0.0.1 ::1]
I1213 08:30:11.655188 10611 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1213 08:30:11.915820 10611 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1213 08:30:12.051535 10611 kubeadm.go:319] [certs] Generating "sa" key and public key
I1213 08:30:12.051796 10611 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1213 08:30:12.154743 10611 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1213 08:30:12.614925 10611 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1213 08:30:12.720348 10611 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1213 08:30:13.012433 10611 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1213 08:30:13.388034 10611 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1213 08:30:13.388149 10611 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1213 08:30:13.390434 10611 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1213 08:30:13.392322 10611 out.go:252] - Booting up control plane ...
I1213 08:30:13.392414 10611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1213 08:30:13.392493 10611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1213 08:30:13.393125 10611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1213 08:30:13.412663 10611 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1213 08:30:13.413014 10611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1213 08:30:13.419936 10611 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1213 08:30:13.420147 10611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1213 08:30:13.420233 10611 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1213 08:30:13.630049 10611 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1213 08:30:13.630212 10611 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1213 08:30:15.629709 10611 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001600303s
I1213 08:30:15.632617 10611 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1213 08:30:15.633192 10611 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.154:8443/livez
I1213 08:30:15.633373 10611 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1213 08:30:15.633481 10611 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1213 08:30:18.766310 10611 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.136117608s
I1213 08:30:19.739473 10611 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.110139068s
I1213 08:30:21.630241 10611 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002152899s
I1213 08:30:21.648478 10611 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1213 08:30:21.663596 10611 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1213 08:30:21.680562 10611 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1213 08:30:21.680816 10611 kubeadm.go:319] [mark-control-plane] Marking the node addons-917695 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1213 08:30:21.697501 10611 kubeadm.go:319] [bootstrap-token] Using token: 0rhxi5.wx2cb5rdzqjx1sa0
I1213 08:30:21.698983 10611 out.go:252] - Configuring RBAC rules ...
I1213 08:30:21.699117 10611 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1213 08:30:21.706345 10611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1213 08:30:21.720074 10611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1213 08:30:21.728448 10611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1213 08:30:21.735225 10611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1213 08:30:21.741391 10611 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1213 08:30:22.040499 10611 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1213 08:30:22.505211 10611 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1213 08:30:23.036185 10611 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1213 08:30:23.039285 10611 kubeadm.go:319]
I1213 08:30:23.039399 10611 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1213 08:30:23.039410 10611 kubeadm.go:319]
I1213 08:30:23.039538 10611 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1213 08:30:23.039605 10611 kubeadm.go:319]
I1213 08:30:23.039648 10611 kubeadm.go:319] mkdir -p $HOME/.kube
I1213 08:30:23.039731 10611 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1213 08:30:23.039805 10611 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1213 08:30:23.039815 10611 kubeadm.go:319]
I1213 08:30:23.039868 10611 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1213 08:30:23.039872 10611 kubeadm.go:319]
I1213 08:30:23.039939 10611 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1213 08:30:23.039954 10611 kubeadm.go:319]
I1213 08:30:23.040024 10611 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1213 08:30:23.040142 10611 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1213 08:30:23.040248 10611 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1213 08:30:23.040258 10611 kubeadm.go:319]
I1213 08:30:23.040379 10611 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1213 08:30:23.040533 10611 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1213 08:30:23.040545 10611 kubeadm.go:319]
I1213 08:30:23.040668 10611 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0rhxi5.wx2cb5rdzqjx1sa0 \
I1213 08:30:23.040809 10611 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:2609ea5c2ad736c8675b310823db9ecbd6716e426dc88532c1b983e6f0047a99 \
I1213 08:30:23.040867 10611 kubeadm.go:319] --control-plane
I1213 08:30:23.040884 10611 kubeadm.go:319]
I1213 08:30:23.040992 10611 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1213 08:30:23.041009 10611 kubeadm.go:319]
I1213 08:30:23.041117 10611 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0rhxi5.wx2cb5rdzqjx1sa0 \
I1213 08:30:23.041278 10611 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:2609ea5c2ad736c8675b310823db9ecbd6716e426dc88532c1b983e6f0047a99
I1213 08:30:23.042878 10611 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
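The lone preflight warning is harmless here because minikube drives the kubelet itself; on a hand-run node it would be cleared with systemctl enable kubelet.service. The --discovery-token-ca-cert-hash shown in the join commands can be recomputed from the CA on disk (the standard kubeadm recipe, pointed at this cluster's cert path and assuming an RSA CA key):

    # Recompute the sha256 discovery hash shown in the join command
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'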
I1213 08:30:23.042911 10611 cni.go:84] Creating CNI manager for ""
I1213 08:30:23.042922 10611 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1213 08:30:23.045036 10611 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1213 08:30:23.046520 10611 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1213 08:30:23.060325 10611 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
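The 496-byte 1-k8s.conflist written here is a standard two-plugin bridge chain. A representative version, reconstructed rather than dumped from the guest; values such as hairpinMode and the 10.244.0.0/16 subnet follow the pod CIDR chosen above, and details may differ from minikube's actual template:

    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }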
I1213 08:30:23.082365 10611 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1213 08:30:23.082448 10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 08:30:23.082490 10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-917695 minikube.k8s.io/updated_at=2025_12_13T08_30_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453 minikube.k8s.io/name=addons-917695 minikube.k8s.io/primary=true
I1213 08:30:23.231136 10611 ops.go:34] apiserver oom_adj: -16
I1213 08:30:23.231263 10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 08:30:23.731931 10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 08:30:24.231883 10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 08:30:24.731461 10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 08:30:25.232152 10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 08:30:25.732336 10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 08:30:26.231605 10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 08:30:26.732244 10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 08:30:27.232029 10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 08:30:27.731782 10611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 08:30:27.831459 10611 kubeadm.go:1114] duration metric: took 4.749058671s to wait for elevateKubeSystemPrivileges
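The burst of kubectl get sa default calls above is a readiness poll: the default ServiceAccount only materializes once the controller-manager's serviceaccount controller has run, so its appearance is a cheap signal that the control plane is serving writes. The loop, roughly:

    # Poll every 500ms until the default ServiceAccount exists
    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done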
I1213 08:30:27.831503 10611 kubeadm.go:403] duration metric: took 18.451962979s to StartCluster
I1213 08:30:27.831527 10611 settings.go:142] acquiring lock: {Name:mk0e8a3f7580725c20103c6ec548a6aa0dd069a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:30:27.831693 10611 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22128-5761/kubeconfig
I1213 08:30:27.832392 10611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5761/kubeconfig: {Name:mkf140a0b47414a2ed3efe0851d61f10012610de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:30:27.832632 10611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1213 08:30:27.832672 10611 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1213 08:30:27.832717 10611 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1213 08:30:27.832824 10611 addons.go:70] Setting inspektor-gadget=true in profile "addons-917695"
I1213 08:30:27.832846 10611 addons.go:239] Setting addon inspektor-gadget=true in "addons-917695"
I1213 08:30:27.832845 10611 addons.go:70] Setting yakd=true in profile "addons-917695"
I1213 08:30:27.832859 10611 addons.go:239] Setting addon yakd=true in "addons-917695"
I1213 08:30:27.832875 10611 host.go:66] Checking if "addons-917695" exists ...
I1213 08:30:27.832879 10611 addons.go:70] Setting storage-provisioner=true in profile "addons-917695"
I1213 08:30:27.832888 10611 addons.go:239] Setting addon storage-provisioner=true in "addons-917695"
I1213 08:30:27.832877 10611 addons.go:70] Setting registry-creds=true in profile "addons-917695"
I1213 08:30:27.832903 10611 host.go:66] Checking if "addons-917695" exists ...
I1213 08:30:27.832929 10611 addons.go:239] Setting addon registry-creds=true in "addons-917695"
I1213 08:30:27.832909 10611 addons.go:70] Setting default-storageclass=true in profile "addons-917695"
I1213 08:30:27.832946 10611 addons.go:70] Setting volcano=true in profile "addons-917695"
I1213 08:30:27.832962 10611 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-917695"
I1213 08:30:27.832975 10611 addons.go:239] Setting addon volcano=true in "addons-917695"
I1213 08:30:27.832990 10611 host.go:66] Checking if "addons-917695" exists ...
I1213 08:30:27.832993 10611 host.go:66] Checking if "addons-917695" exists ...
I1213 08:30:27.833023 10611 addons.go:70] Setting volumesnapshots=true in profile "addons-917695"
I1213 08:30:27.833033 10611 addons.go:239] Setting addon volumesnapshots=true in "addons-917695"
I1213 08:30:27.833048 10611 host.go:66] Checking if "addons-917695" exists ...
I1213 08:30:27.833601 10611 addons.go:70] Setting cloud-spanner=true in profile "addons-917695"
I1213 08:30:27.833636 10611 addons.go:239] Setting addon cloud-spanner=true in "addons-917695"
I1213 08:30:27.833677 10611 host.go:66] Checking if "addons-917695" exists ...
I1213 08:30:27.833857 10611 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-917695"
I1213 08:30:27.833898 10611 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-917695"
I1213 08:30:27.833933 10611 host.go:66] Checking if "addons-917695" exists ...
I1213 08:30:27.834364 10611 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-917695"
I1213 08:30:27.834388 10611 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-917695"
I1213 08:30:27.834413 10611 host.go:66] Checking if "addons-917695" exists ...
I1213 08:30:27.834460 10611 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-917695"
I1213 08:30:27.834475 10611 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-917695"
I1213 08:30:27.832929 10611 config.go:182] Loaded profile config "addons-917695": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:30:27.834582 10611 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-917695"
I1213 08:30:27.834581 10611 addons.go:70] Setting registry=true in profile "addons-917695"
I1213 08:30:27.834599 10611 addons.go:239] Setting addon registry=true in "addons-917695"
I1213 08:30:27.834621 10611 host.go:66] Checking if "addons-917695" exists ...
I1213 08:30:27.834637 10611 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-917695"
I1213 08:30:27.834665 10611 host.go:66] Checking if "addons-917695" exists ...
I1213 08:30:27.834739 10611 out.go:179] * Verifying Kubernetes components...
I1213 08:30:27.834842 10611 addons.go:70] Setting metrics-server=true in profile "addons-917695"
I1213 08:30:27.834859 10611 addons.go:239] Setting addon metrics-server=true in "addons-917695"
I1213 08:30:27.834882 10611 host.go:66] Checking if "addons-917695" exists ...
I1213 08:30:27.835266 10611 addons.go:70] Setting ingress=true in profile "addons-917695"
I1213 08:30:27.835332 10611 addons.go:239] Setting addon ingress=true in "addons-917695"
I1213 08:30:27.835379 10611 host.go:66] Checking if "addons-917695" exists ...
I1213 08:30:27.835515 10611 addons.go:70] Setting ingress-dns=true in profile "addons-917695"
I1213 08:30:27.835533 10611 addons.go:239] Setting addon ingress-dns=true in "addons-917695"
I1213 08:30:27.835566 10611 host.go:66] Checking if "addons-917695" exists ...
I1213 08:30:27.832875 10611 host.go:66] Checking if "addons-917695" exists ...
I1213 08:30:27.836121 10611 addons.go:70] Setting gcp-auth=true in profile "addons-917695"
I1213 08:30:27.836146 10611 mustload.go:66] Loading cluster: addons-917695
I1213 08:30:27.836245 10611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1213 08:30:27.836441 10611 config.go:182] Loaded profile config "addons-917695": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
W1213 08:30:27.840784 10611 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1213 08:30:27.842176 10611 addons.go:239] Setting addon default-storageclass=true in "addons-917695"
I1213 08:30:27.842217 10611 host.go:66] Checking if "addons-917695" exists ...
I1213 08:30:27.842670 10611 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-917695"
I1213 08:30:27.842714 10611 host.go:66] Checking if "addons-917695" exists ...
I1213 08:30:27.842982 10611 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
I1213 08:30:27.844065 10611 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
I1213 08:30:27.844077 10611 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1213 08:30:27.844129 10611 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1213 08:30:27.844181 10611 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1213 08:30:27.844188 10611 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1213 08:30:27.844068 10611 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1213 08:30:27.844065 10611 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1213 08:30:27.844156 10611 host.go:66] Checking if "addons-917695" exists ...
I1213 08:30:27.845108 10611 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1213 08:30:27.845108 10611 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1213 08:30:27.845200 10611 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1213 08:30:27.845209 10611 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1213 08:30:27.845215 10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1213 08:30:27.845255 10611 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
I1213 08:30:27.846278 10611 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1213 08:30:27.846311 10611 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1213 08:30:27.846316 10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1213 08:30:27.846325 10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1213 08:30:27.846336 10611 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1213 08:30:27.846352 10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1213 08:30:27.846382 10611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1213 08:30:27.846399 10611 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1213 08:30:27.846278 10611 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1213 08:30:27.846494 10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1213 08:30:27.846281 10611 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.6
I1213 08:30:27.845490 10611 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1213 08:30:27.846639 10611 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1213 08:30:27.846671 10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1213 08:30:27.846700 10611 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1213 08:30:27.847012 10611 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1213 08:30:27.847028 10611 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1213 08:30:27.848039 10611 out.go:179] - Using image docker.io/registry:3.0.0
I1213 08:30:27.848066 10611 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1213 08:30:27.848041 10611 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1213 08:30:27.848527 10611 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1213 08:30:27.848110 10611 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1213 08:30:27.848595 10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1213 08:30:27.848866 10611 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1213 08:30:27.849928 10611 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1213 08:30:27.849944 10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1213 08:30:27.850838 10611 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1213 08:30:27.850838 10611 out.go:179] - Using image docker.io/busybox:stable
I1213 08:30:27.852241 10611 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1213 08:30:27.852329 10611 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1213 08:30:27.852344 10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1213 08:30:27.853494 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.853692 10611 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1213 08:30:27.855113 10611 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1213 08:30:27.855315 10611 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1213 08:30:27.855332 10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
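[editor's note] The `scp memory --> <path> (N bytes)` entries above stream addon manifests held in memory straight to files on the guest over SSH, so nothing has to exist on the host filesystem first. A minimal sketch of the idea, assuming an already-dialed *ssh.Client from golang.org/x/crypto/ssh and piping through `sudo tee` (an approximation for illustration, not minikube's actual ssh_runner implementation):

package sketch

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// writeRemoteFile streams an in-memory manifest to dest on the guest by
// piping it through sudo tee, approximating the "scp memory --> dest" step.
func writeRemoteFile(client *ssh.Client, data []byte, dest string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// tee runs under sudo so /etc/kubernetes/addons is writable.
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dest))
}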
I1213 08:30:27.855594 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:27.855641 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.856678 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:27.857649 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.857902 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.859011 10611 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1213 08:30:27.860398 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.861207 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:27.861247 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.861492 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.861877 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.861975 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.862184 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:27.862280 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:27.862331 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.862386 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.863319 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:27.863355 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.863355 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:27.863693 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.863713 10611 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1213 08:30:27.863936 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:27.863965 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.863962 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:27.864047 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.864447 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:27.864490 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.864501 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:27.864529 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.864660 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:27.864686 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.864684 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:27.864718 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.864863 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.865125 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:27.865168 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:27.865181 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:27.865549 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:27.865584 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.865636 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:27.865669 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.865907 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:27.865938 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.865945 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:27.865944 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:27.865974 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.865990 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:27.866400 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:27.866409 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:27.866789 10611 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1213 08:30:27.867429 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.867754 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.867796 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:27.867825 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.867976 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:27.868284 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:27.868325 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.868514 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:27.869696 10611 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1213 08:30:27.871007 10611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1213 08:30:27.871019 10611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1213 08:30:27.873775 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.874134 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:27.874154 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:27.874301 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
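[editor's note] Each `sshutil.go:53] new ssh client` line above opens a separate SSH connection to the VM using the per-machine key from SSHKeyPath; the parallel addon installers each get their own connection, which is why the MAC/DHCP-lease lookups repeat. A sketch of building such a client with golang.org/x/crypto/ssh (the InsecureIgnoreHostKey callback is a simplification for the sketch):

package sketch

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// newSSHClient dials the guest with public-key auth, matching the
// &{IP:... Port:22 SSHKeyPath:... Username:docker} fields logged above.
func newSSHClient(addr, user, keyPath string) (*ssh.Client, error) {
	pem, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(pem)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // simplification; verify host keys in real code
	}
	return ssh.Dial("tcp", addr, cfg) // addr like "192.168.39.154:22"
}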
W1213 08:30:28.143257 10611 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47486->192.168.39.154:22: read: connection reset by peer
I1213 08:30:28.143302 10611 retry.go:31] will retry after 292.01934ms: ssh: handshake failed: read tcp 192.168.39.1:47486->192.168.39.154:22: read: connection reset by peer
W1213 08:30:28.170403 10611 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47510->192.168.39.154:22: read: connection reset by peer
I1213 08:30:28.170429 10611 retry.go:31] will retry after 182.548903ms: ssh: handshake failed: read tcp 192.168.39.1:47510->192.168.39.154:22: read: connection reset by peer
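[editor's note] The two handshake failures above (connection reset while the guest's sshd is still settling) are retried after short randomized delays rather than failing the run. A generic sketch of that retry-with-jitter pattern (illustrative; minikube's retry.go differs in detail):

package sketch

import (
	"fmt"
	"math/rand"
	"time"
)

// withRetry re-runs op with growing, jittered sleeps until it succeeds or the
// attempt budget is spent, mirroring the "will retry after ..." lines above.
func withRetry(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		sleep := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}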
I1213 08:30:28.605934 10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1213 08:30:28.706092 10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1213 08:30:28.713812 10611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
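[editor's note] The bash pipeline above rewrites the coredns ConfigMap in place: sed inserts a hosts{} stanza ahead of the `forward . /etc/resolv.conf` plugin so that host.minikube.internal resolves to the host-side gateway (192.168.39.1), then feeds the result back through `kubectl replace`. The same edit via client-go looks roughly like this (a sketch, not minikube's implementation; the "    forward ." anchor string is an assumption about the Corefile layout):

package sketch

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// injectHostRecord splices a hosts{} stanza into the Corefile so that
// host.minikube.internal resolves to hostIP, as the sed pipeline above does.
func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	stanza := "    hosts {\n       " + hostIP + " host.minikube.internal\n       fallthrough\n    }\n"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "    forward .", stanza+"    forward .", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}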
I1213 08:30:28.713857 10611 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1213 08:30:28.780116 10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1213 08:30:28.811347 10611 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1213 08:30:28.811376 10611 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1213 08:30:28.846273 10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1213 08:30:28.846390 10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1213 08:30:28.861107 10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1213 08:30:28.862968 10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1213 08:30:28.889213 10611 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1213 08:30:28.889247 10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1213 08:30:28.907404 10611 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1213 08:30:28.907434 10611 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1213 08:30:28.970415 10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1213 08:30:29.016064 10611 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1213 08:30:29.016090 10611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1213 08:30:29.028823 10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1213 08:30:29.566519 10611 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1213 08:30:29.566547 10611 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1213 08:30:29.601129 10611 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1213 08:30:29.601161 10611 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1213 08:30:29.678813 10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1213 08:30:29.691770 10611 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1213 08:30:29.691800 10611 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1213 08:30:29.696107 10611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1213 08:30:29.696137 10611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1213 08:30:29.714392 10611 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1213 08:30:29.714419 10611 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1213 08:30:29.866360 10611 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1213 08:30:29.866380 10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1213 08:30:30.047200 10611 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1213 08:30:30.047222 10611 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1213 08:30:30.073353 10611 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1213 08:30:30.073378 10611 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1213 08:30:30.112783 10611 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1213 08:30:30.112817 10611 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1213 08:30:30.206035 10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1213 08:30:30.234133 10611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1213 08:30:30.234166 10611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1213 08:30:30.361924 10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1213 08:30:30.370167 10611 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1213 08:30:30.370190 10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1213 08:30:30.386060 10611 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1213 08:30:30.386092 10611 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1213 08:30:30.603334 10611 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1213 08:30:30.603358 10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1213 08:30:30.612317 10611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1213 08:30:30.612346 10611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1213 08:30:30.810012 10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.204040098s)
I1213 08:30:30.895573 10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1213 08:30:30.895727 10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1213 08:30:31.143744 10611 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1213 08:30:31.143768 10611 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1213 08:30:31.575253 10611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1213 08:30:31.575279 10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1213 08:30:32.006873 10611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1213 08:30:32.006912 10611 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1213 08:30:32.462084 10611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1213 08:30:32.462113 10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1213 08:30:32.757623 10611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1213 08:30:32.757644 10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1213 08:30:32.976567 10611 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1213 08:30:32.976590 10611 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1213 08:30:33.241117 10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1213 08:30:34.996044 10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.289914118s)
I1213 08:30:34.996109 10611 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.282223771s)
I1213 08:30:34.996174 10611 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.282316729s)
I1213 08:30:34.996197 10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.216049923s)
I1213 08:30:34.996200 10611 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1213 08:30:34.996261 10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.149942576s)
I1213 08:30:34.996307 10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.149873382s)
I1213 08:30:34.996882 10611 node_ready.go:35] waiting up to 6m0s for node "addons-917695" to be "Ready" ...
I1213 08:30:35.079473 10611 node_ready.go:49] node "addons-917695" is "Ready"
I1213 08:30:35.079499 10611 node_ready.go:38] duration metric: took 82.598207ms for node "addons-917695" to be "Ready" ...
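[editor's note] node_ready.go polls the node object until its Ready condition is True; here the node was already Ready on the first check (~83ms). The readiness test itself reduces to scanning Status.Conditions (a sketch assuming a configured client-go clientset):

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady reports whether the named node has condition Ready=True, the
// predicate behind the node_ready.go lines above.
func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}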
I1213 08:30:35.079511 10611 api_server.go:52] waiting for apiserver process to appear ...
I1213 08:30:35.079561 10611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1213 08:30:35.103922 10611 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
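[editor's note] The warning above is an optimistic-concurrency conflict: another writer updated the local-path StorageClass between minikube's read and its write, so the apiserver rejected the stale resourceVersion. It is benign here, and the standard remedy is to re-read and re-apply the change on conflict, e.g. with client-go's RetryOnConflict (a sketch, not minikube's code):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// markNonDefault clears the default-class annotation, re-fetching the object
// and retrying whenever the apiserver reports a resourceVersion conflict.
func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})
}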
I1213 08:30:35.139952 10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (6.278807837s)
I1213 08:30:35.140036 10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.277030475s)
I1213 08:30:35.140089 10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.169648644s)
I1213 08:30:35.140165 10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.111311445s)
I1213 08:30:35.302935 10611 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1213 08:30:35.305996 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:35.306574 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:35.306607 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:35.306854 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:35.522419 10611 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-917695" context rescaled to 1 replicas
I1213 08:30:35.868679 10611 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1213 08:30:36.106606 10611 addons.go:239] Setting addon gcp-auth=true in "addons-917695"
I1213 08:30:36.106665 10611 host.go:66] Checking if "addons-917695" exists ...
I1213 08:30:36.108415 10611 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1213 08:30:36.110832 10611 main.go:143] libmachine: domain addons-917695 has defined MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:36.111344 10611 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:48:3f", ip: ""} in network mk-addons-917695: {Iface:virbr1 ExpiryTime:2025-12-13 09:29:59 +0000 UTC Type:0 Mac:52:54:00:4b:48:3f Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:addons-917695 Clientid:01:52:54:00:4b:48:3f}
I1213 08:30:36.111368 10611 main.go:143] libmachine: domain addons-917695 has defined IP address 192.168.39.154 and MAC address 52:54:00:4b:48:3f in network mk-addons-917695
I1213 08:30:36.111588 10611 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22128-5761/.minikube/machines/addons-917695/id_rsa Username:docker}
I1213 08:30:37.057706 10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.378852905s)
I1213 08:30:37.057748 10611 addons.go:495] Verifying addon ingress=true in "addons-917695"
I1213 08:30:37.057792 10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.851713278s)
I1213 08:30:37.057821 10611 addons.go:495] Verifying addon registry=true in "addons-917695"
I1213 08:30:37.057897 10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.695938221s)
I1213 08:30:37.057959 10611 addons.go:495] Verifying addon metrics-server=true in "addons-917695"
I1213 08:30:37.059695 10611 out.go:179] * Verifying ingress addon...
I1213 08:30:37.059786 10611 out.go:179] * Verifying registry addon...
I1213 08:30:37.061469 10611 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1213 08:30:37.061767 10611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1213 08:30:37.101312 10611 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1213 08:30:37.101340 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 08:30:37.126631 10611 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1213 08:30:37.126653 10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
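[editor's note] The kapi.go lines above implement a poll loop: list the pods matching a label selector, then keep checking until every one is Running — the loop that produces the long runs of "waiting for pod ..." lines below. A compact sketch of that loop (the poll interval is illustrative):

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPods polls until every pod matching selector in ns is Running.
func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // tolerate transient errors and empty lists; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}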
I1213 08:30:37.240080 10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.344457567s)
W1213 08:30:37.240126 10611 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1213 08:30:37.240155 10611 retry.go:31] will retry after 293.996062ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
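[editor's note] The failure above is a CRD ordering race, not a broken manifest: the batch apply creates the VolumeSnapshotClass CRD and a VolumeSnapshotClass instance together, and the new kind is not yet discoverable when the instance is submitted ("ensure CRDs are installed first"). The forced re-apply at 08:30:37.535 below succeeds once the CRDs are registered. One way to avoid the race is to wait for the CRD's Established condition (here, volumesnapshotclasses.snapshot.storage.k8s.io) before applying instances; a sketch using the apiextensions client:

package sketch

import (
	"context"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForCRD blocks until the named CRD reports Established=True, after which
// objects of its kind (e.g. VolumeSnapshotClass) can be applied safely.
func waitForCRD(ctx context.Context, cs apiextclient.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // not created yet; keep polling
			}
			for _, c := range crd.Status.Conditions {
				if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}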
I1213 08:30:37.240168 10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.344410602s)
I1213 08:30:37.242252 10611 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-917695 service yakd-dashboard -n yakd-dashboard
I1213 08:30:37.535086 10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1213 08:30:37.573128 10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 08:30:37.574464 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 08:30:38.085030 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 08:30:38.085084 10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 08:30:38.474579 10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.233420261s)
I1213 08:30:38.474612 10611 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.395031039s)
I1213 08:30:38.474621 10611 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-917695"
I1213 08:30:38.474644 10611 api_server.go:72] duration metric: took 10.64193603s to wait for apiserver process to appear ...
I1213 08:30:38.474653 10611 api_server.go:88] waiting for apiserver healthz status ...
I1213 08:30:38.474675 10611 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
I1213 08:30:38.474716 10611 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.366277621s)
I1213 08:30:38.476166 10611 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1213 08:30:38.476235 10611 out.go:179] * Verifying csi-hostpath-driver addon...
I1213 08:30:38.477625 10611 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1213 08:30:38.478133 10611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1213 08:30:38.478932 10611 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1213 08:30:38.478951 10611 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1213 08:30:38.492937 10611 api_server.go:279] https://192.168.39.154:8443/healthz returned 200:
ok
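[editor's note] api_server.go treats an HTTP 200 from /healthz with body "ok" as a healthy control plane, as logged above. The equivalent probe in Go, assuming the cluster CA bundle is available on disk (the path is illustrative; minikube keeps the real one under its profile directory):

package sketch

import (
	"crypto/tls"
	"crypto/x509"
	"io"
	"net/http"
	"os"
)

// apiserverHealthy GETs url (e.g. https://192.168.39.154:8443/healthz) and
// reports whether the response is 200 with body "ok".
func apiserverHealthy(url, caPath string) (bool, error) {
	caPEM, err := os.ReadFile(caPath) // illustrative, e.g. <minikube home>/ca.crt
	if err != nil {
		return false, err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}}}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}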
I1213 08:30:38.496688 10611 api_server.go:141] control plane version: v1.34.2
I1213 08:30:38.496715 10611 api_server.go:131] duration metric: took 22.053346ms to wait for apiserver health ...
I1213 08:30:38.496725 10611 system_pods.go:43] waiting for kube-system pods to appear ...
I1213 08:30:38.509387 10611 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 08:30:38.509414 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 08:30:38.522447 10611 system_pods.go:59] 20 kube-system pods found
I1213 08:30:38.522490 10611 system_pods.go:61] "amd-gpu-device-plugin-fv8qk" [06ada580-f960-46ba-a686-1cf02b573962] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1213 08:30:38.522503 10611 system_pods.go:61] "coredns-66bc5c9577-jvg44" [43d6b098-f87e-4c86-add2-0ce65ebcd7e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1213 08:30:38.522513 10611 system_pods.go:61] "coredns-66bc5c9577-qk82t" [98132a09-ca4a-4070-b715-3def082d8cd1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1213 08:30:38.522529 10611 system_pods.go:61] "csi-hostpath-attacher-0" [4b1955f9-87f7-4de4-ad2c-e76d9fab8492] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1213 08:30:38.522540 10611 system_pods.go:61] "csi-hostpath-resizer-0" [f666bb51-66c3-4c9e-8d61-f94da690978e] Pending
I1213 08:30:38.522550 10611 system_pods.go:61] "csi-hostpathplugin-gxqlr" [5248a1ff-c04b-4388-952b-7ba796fd30e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1213 08:30:38.522560 10611 system_pods.go:61] "etcd-addons-917695" [85b5d74e-ba25-4520-83c0-4ce3b36b0a68] Running
I1213 08:30:38.522567 10611 system_pods.go:61] "kube-apiserver-addons-917695" [e928775e-45e6-48d0-ae6d-fa836392080b] Running
I1213 08:30:38.522573 10611 system_pods.go:61] "kube-controller-manager-addons-917695" [e3944cf7-0b72-4719-90e0-a1a5a32b41fb] Running
I1213 08:30:38.522581 10611 system_pods.go:61] "kube-ingress-dns-minikube" [40a1c68c-2c20-480c-9339-6eeb11a0e5d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1213 08:30:38.522587 10611 system_pods.go:61] "kube-proxy-t9crl" [b50a42b7-5b85-4440-b27c-f3a2376cdfac] Running
I1213 08:30:38.522593 10611 system_pods.go:61] "kube-scheduler-addons-917695" [bedc314b-a5cd-4697-917b-a4ebc62ca5f1] Running
I1213 08:30:38.522601 10611 system_pods.go:61] "metrics-server-85b7d694d7-txm49" [b0c671da-5ff1-4882-b011-4feddd170742] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1213 08:30:38.522612 10611 system_pods.go:61] "nvidia-device-plugin-daemonset-fc667" [3cb5ce62-9820-4ff4-a96c-d1dd68c20667] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1213 08:30:38.522628 10611 system_pods.go:61] "registry-6b586f9694-jk6nh" [5b9cee4c-b367-49f4-bc49-497edd267414] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1213 08:30:38.522637 10611 system_pods.go:61] "registry-creds-764b6fb674-rcrdr" [b6c2f09d-b53b-43a4-99f0-a69adbf0ff6b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1213 08:30:38.522644 10611 system_pods.go:61] "registry-proxy-6svfh" [64d7a435-6506-4bba-a294-e2111eee1c24] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1213 08:30:38.522653 10611 system_pods.go:61] "snapshot-controller-7d9fbc56b8-877d8" [c6688aaa-a34c-4ad0-8f0d-2d2100bd7a6a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1213 08:30:38.522661 10611 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pxvwz" [81346d9c-2e81-4f84-9f88-574efa1f58c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1213 08:30:38.522674 10611 system_pods.go:61] "storage-provisioner" [f88dd7f0-f94c-48ca-a7b0-7461dc3a2e16] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1213 08:30:38.522687 10611 system_pods.go:74] duration metric: took 25.952997ms to wait for pod list to return data ...
I1213 08:30:38.522699 10611 default_sa.go:34] waiting for default service account to be created ...
I1213 08:30:38.573405 10611 default_sa.go:45] found service account: "default"
I1213 08:30:38.573431 10611 default_sa.go:55] duration metric: took 50.72468ms for default service account to be created ...
I1213 08:30:38.573442 10611 system_pods.go:116] waiting for k8s-apps to be running ...
I1213 08:30:38.578981 10611 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1213 08:30:38.579003 10611 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1213 08:30:38.627379 10611 system_pods.go:86] 20 kube-system pods found
I1213 08:30:38.627408 10611 system_pods.go:89] "amd-gpu-device-plugin-fv8qk" [06ada580-f960-46ba-a686-1cf02b573962] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1213 08:30:38.627414 10611 system_pods.go:89] "coredns-66bc5c9577-jvg44" [43d6b098-f87e-4c86-add2-0ce65ebcd7e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1213 08:30:38.627424 10611 system_pods.go:89] "coredns-66bc5c9577-qk82t" [98132a09-ca4a-4070-b715-3def082d8cd1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1213 08:30:38.627433 10611 system_pods.go:89] "csi-hostpath-attacher-0" [4b1955f9-87f7-4de4-ad2c-e76d9fab8492] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1213 08:30:38.627441 10611 system_pods.go:89] "csi-hostpath-resizer-0" [f666bb51-66c3-4c9e-8d61-f94da690978e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1213 08:30:38.627453 10611 system_pods.go:89] "csi-hostpathplugin-gxqlr" [5248a1ff-c04b-4388-952b-7ba796fd30e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1213 08:30:38.627459 10611 system_pods.go:89] "etcd-addons-917695" [85b5d74e-ba25-4520-83c0-4ce3b36b0a68] Running
I1213 08:30:38.627465 10611 system_pods.go:89] "kube-apiserver-addons-917695" [e928775e-45e6-48d0-ae6d-fa836392080b] Running
I1213 08:30:38.627472 10611 system_pods.go:89] "kube-controller-manager-addons-917695" [e3944cf7-0b72-4719-90e0-a1a5a32b41fb] Running
I1213 08:30:38.627480 10611 system_pods.go:89] "kube-ingress-dns-minikube" [40a1c68c-2c20-480c-9339-6eeb11a0e5d4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1213 08:30:38.627488 10611 system_pods.go:89] "kube-proxy-t9crl" [b50a42b7-5b85-4440-b27c-f3a2376cdfac] Running
I1213 08:30:38.627492 10611 system_pods.go:89] "kube-scheduler-addons-917695" [bedc314b-a5cd-4697-917b-a4ebc62ca5f1] Running
I1213 08:30:38.627497 10611 system_pods.go:89] "metrics-server-85b7d694d7-txm49" [b0c671da-5ff1-4882-b011-4feddd170742] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1213 08:30:38.627503 10611 system_pods.go:89] "nvidia-device-plugin-daemonset-fc667" [3cb5ce62-9820-4ff4-a96c-d1dd68c20667] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1213 08:30:38.627517 10611 system_pods.go:89] "registry-6b586f9694-jk6nh" [5b9cee4c-b367-49f4-bc49-497edd267414] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1213 08:30:38.627523 10611 system_pods.go:89] "registry-creds-764b6fb674-rcrdr" [b6c2f09d-b53b-43a4-99f0-a69adbf0ff6b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1213 08:30:38.627532 10611 system_pods.go:89] "registry-proxy-6svfh" [64d7a435-6506-4bba-a294-e2111eee1c24] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1213 08:30:38.627540 10611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-877d8" [c6688aaa-a34c-4ad0-8f0d-2d2100bd7a6a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1213 08:30:38.627567 10611 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pxvwz" [81346d9c-2e81-4f84-9f88-574efa1f58c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1213 08:30:38.627573 10611 system_pods.go:89] "storage-provisioner" [f88dd7f0-f94c-48ca-a7b0-7461dc3a2e16] Running
I1213 08:30:38.627592 10611 system_pods.go:126] duration metric: took 54.141518ms to wait for k8s-apps to be running ...
I1213 08:30:38.627603 10611 system_svc.go:44] waiting for kubelet service to be running ....
I1213 08:30:38.627647 10611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
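[editor's note] `systemctl is-active --quiet` exits 0 only when the unit is active, so the kubelet check above reduces to inspecting an exit status (minikube runs the command over SSH; the local exec below is a simplification):

package sketch

import "os/exec"

// kubeletActive mirrors the `sudo systemctl is-active --quiet service kubelet`
// probe above: systemctl exits zero iff the unit is active.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}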
I1213 08:30:38.628744 10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 08:30:38.629867 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 08:30:38.683991 10611 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1213 08:30:38.684013 10611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1213 08:30:38.735669 10611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1213 08:30:38.985020 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 08:30:39.066466 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 08:30:39.068482 10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 08:30:39.487093 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 08:30:39.527241 10611 system_svc.go:56] duration metric: took 899.630364ms WaitForService to wait for kubelet
I1213 08:30:39.527307 10611 kubeadm.go:587] duration metric: took 11.694578855s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1213 08:30:39.527335 10611 node_conditions.go:102] verifying NodePressure condition ...
I1213 08:30:39.527240 10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.992103835s)
I1213 08:30:39.537667 10611 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1213 08:30:39.537708 10611 node_conditions.go:123] node cpu capacity is 2
I1213 08:30:39.537738 10611 node_conditions.go:105] duration metric: took 10.394784ms to run NodePressure ...
I1213 08:30:39.537756 10611 start.go:242] waiting for startup goroutines ...
I1213 08:30:39.587064 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 08:30:39.587066 10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 08:30:39.995172 10611 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.259463016s)
I1213 08:30:39.996265 10611 addons.go:495] Verifying addon gcp-auth=true in "addons-917695"
I1213 08:30:39.998188 10611 out.go:179] * Verifying gcp-auth addon...
I1213 08:30:39.999985 10611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1213 08:30:40.027600 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 08:30:40.035507 10611 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1213 08:30:40.035533 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
[252 kapi.go:96 poll lines elided, 08:30:40.090 to 08:31:11.503: the selectors kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=registry, and app.kubernetes.io/name=ingress-nginx were re-checked roughly every 500ms and all remained Pending: [<nil>]]
I1213 08:31:11.564901 10611 kapi.go:107] duration metric: took 34.503134342s to wait for kubernetes.io/minikube-addons=registry ...
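At this point the registry selector has gone Ready and drops out of the poll loop (34.5s end to end); the other three selectors keep polling below. To see which pods satisfied it, without assuming which namespace the addon landed in, a sketch:

    kubectl --context addons-917695 get pods -A \
      -l kubernetes.io/minikube-addons=registry -o wide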
[206 kapi.go:96 poll lines elided, 08:31:11.565 to 08:31:45.982: the remaining selectors kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=gcp-auth, and app.kubernetes.io/name=ingress-nginx were re-checked roughly every 500ms and remained Pending: [<nil>]]
I1213 08:31:46.004366 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:46.067811 10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 08:31:46.483959 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 08:31:46.504651 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:46.565454 10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 08:31:46.983387 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 08:31:47.007241 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:47.066960 10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 08:31:47.481462 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 08:31:47.503386 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:47.567171 10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 08:31:47.982363 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 08:31:48.003382 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:48.065591 10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 08:31:48.483695 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 08:31:48.504325 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:48.568833 10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 08:31:48.982407 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 08:31:49.007093 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:49.083771 10611 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 08:31:49.486163 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 08:31:49.586071 10611 kapi.go:107] duration metric: took 1m12.524599352s to wait for app.kubernetes.io/name=ingress-nginx ...
I1213 08:31:49.587169 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:49.983009 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 08:31:50.003792 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:50.483676 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 08:31:50.504540 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:50.986538 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 08:31:51.087881 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:51.482411 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 08:31:51.502953 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:51.982182 10611 kapi.go:107] duration metric: took 1m13.504044327s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1213 08:31:52.002717 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:52.503253 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:53.004951 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:53.505590 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:54.007819 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:54.505381 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:55.006248 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:55.504378 10611 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 08:31:56.004727 10611 kapi.go:107] duration metric: took 1m16.004739814s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1213 08:31:56.006695 10611 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-917695 cluster.
I1213 08:31:56.008309 10611 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1213 08:31:56.009790 10611 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
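The two tips above amount to a small configuration recipe. A minimal sketch of each follows (the pod name, image, and command are hypothetical illustrations; the "true" label value is assumed rather than taken from this log, while the label key, the --refresh flag, and the addons-917695 context come from the lines above):

# Opt a single pod out of the credential mount via the label named above.
kubectl --context addons-917695 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-creds-demo                # hypothetical name
  labels:
    gcp-auth-skip-secret: "true"     # label key quoted from the tip above; value assumed
spec:
  containers:
  - name: sleeper
    image: busybox                   # hypothetical image
    command: ["sleep", "3600"]
EOF

# Refresh credentials into pods that already exist, per the last tip.
# (Generic binary name; substitute your local minikube build.)
minikube -p addons-917695 addons enable gcp-auth --refresh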
I1213 08:31:56.011544 10611 out.go:179] * Enabled addons: registry-creds, cloud-spanner, amd-gpu-device-plugin, inspektor-gadget, ingress-dns, nvidia-device-plugin, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
I1213 08:31:56.013091 10611 addons.go:530] duration metric: took 1m28.180365124s for enable addons: enabled=[registry-creds cloud-spanner amd-gpu-device-plugin inspektor-gadget ingress-dns nvidia-device-plugin storage-provisioner storage-provisioner-rancher metrics-server yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
I1213 08:31:56.013143 10611 start.go:247] waiting for cluster config update ...
I1213 08:31:56.013177 10611 start.go:256] writing updated cluster config ...
I1213 08:31:56.013467 10611 ssh_runner.go:195] Run: rm -f paused
I1213 08:31:56.021079 10611 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1213 08:31:56.024741 10611 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qk82t" in "kube-system" namespace to be "Ready" or be gone ...
I1213 08:31:56.031050 10611 pod_ready.go:94] pod "coredns-66bc5c9577-qk82t" is "Ready"
I1213 08:31:56.031078 10611 pod_ready.go:86] duration metric: took 6.311424ms for pod "coredns-66bc5c9577-qk82t" in "kube-system" namespace to be "Ready" or be gone ...
I1213 08:31:56.034031 10611 pod_ready.go:83] waiting for pod "etcd-addons-917695" in "kube-system" namespace to be "Ready" or be gone ...
I1213 08:31:56.040587 10611 pod_ready.go:94] pod "etcd-addons-917695" is "Ready"
I1213 08:31:56.040611 10611 pod_ready.go:86] duration metric: took 6.557647ms for pod "etcd-addons-917695" in "kube-system" namespace to be "Ready" or be gone ...
I1213 08:31:56.043032 10611 pod_ready.go:83] waiting for pod "kube-apiserver-addons-917695" in "kube-system" namespace to be "Ready" or be gone ...
I1213 08:31:56.047769 10611 pod_ready.go:94] pod "kube-apiserver-addons-917695" is "Ready"
I1213 08:31:56.047792 10611 pod_ready.go:86] duration metric: took 4.739875ms for pod "kube-apiserver-addons-917695" in "kube-system" namespace to be "Ready" or be gone ...
I1213 08:31:56.050486 10611 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-917695" in "kube-system" namespace to be "Ready" or be gone ...
I1213 08:31:56.425867 10611 pod_ready.go:94] pod "kube-controller-manager-addons-917695" is "Ready"
I1213 08:31:56.425899 10611 pod_ready.go:86] duration metric: took 375.37084ms for pod "kube-controller-manager-addons-917695" in "kube-system" namespace to be "Ready" or be gone ...
I1213 08:31:56.625569 10611 pod_ready.go:83] waiting for pod "kube-proxy-t9crl" in "kube-system" namespace to be "Ready" or be gone ...
I1213 08:31:57.025548 10611 pod_ready.go:94] pod "kube-proxy-t9crl" is "Ready"
I1213 08:31:57.025574 10611 pod_ready.go:86] duration metric: took 399.982799ms for pod "kube-proxy-t9crl" in "kube-system" namespace to be "Ready" or be gone ...
I1213 08:31:57.225609 10611 pod_ready.go:83] waiting for pod "kube-scheduler-addons-917695" in "kube-system" namespace to be "Ready" or be gone ...
I1213 08:31:57.628536 10611 pod_ready.go:94] pod "kube-scheduler-addons-917695" is "Ready"
I1213 08:31:57.628564 10611 pod_ready.go:86] duration metric: took 402.924944ms for pod "kube-scheduler-addons-917695" in "kube-system" namespace to be "Ready" or be gone ...
I1213 08:31:57.628575 10611 pod_ready.go:40] duration metric: took 1.607467659s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
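The six waits recorded above can be approximated by hand with stock kubectl. A rough sketch only, not the code minikube runs (note one behavioral difference: unlike minikube's "Ready or be gone" check, kubectl wait fails when no pod matches a selector; the selectors and the 4m budget are copied from the log lines above):

# Approximate the extra kube-system readiness sweep, one selector at a time.
for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
           component=kube-controller-manager k8s-app=kube-proxy \
           component=kube-scheduler; do
  kubectl --context addons-917695 -n kube-system \
    wait --for=condition=Ready pod -l "$sel" --timeout=4m
done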
I1213 08:31:57.672484 10611 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
I1213 08:31:57.674528 10611 out.go:179] * Done! kubectl is now configured to use "addons-917695" cluster and "default" namespace by default
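The "Done!" line asserts both a current context and a default namespace; both are easy to confirm with stock kubectl (nothing minikube-specific assumed):

# Expect "addons-917695" as the current context ...
kubectl config current-context
# ... and an empty or "default" namespace for that context.
kubectl config view --minify -o jsonpath='{.contexts[0].context.namespace}'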
==> CRI-O <==
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.047553995Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765614906047524137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:546838,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21c3845d-3cfe-4cb7-9cd0-6a64f4274f7e name=/runtime.v1.ImageService/ImageFsInfo
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.048686936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=435f0f49-a32c-4a1d-a356-c8c1b1dc3a54 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.048859936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=435f0f49-a32c-4a1d-a356-c8c1b1dc3a54 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.049203162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f6d4159bb313b29b848d2a01545e2bd972786fdba122275f9cf4a27684260fc,PodSandboxId:68a2d4cc7bf7c1e50e10d7b1c4038ef53b43a7696c313896c584514f02490911,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765614764509897600,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01c1c75f-6820-4ed0-adec-927c0fe8b534,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a463f2fb926733e1d47efa227c43a1d469c24d567b16b61d48151c6df2d0dbc,PodSandboxId:366bee97de5b2407f50a1cbb1f93cff7abe3fb1fb256d81f9b05d62ed63f07e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765614721906207640,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c1bba69-7ed7-4165-8c95-96b84fd3c6d0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7459c2ce3b80a40a713d64a6e31e0a0423bbbcfa2489d1fe378bf461d9f8794,PodSandboxId:d54d5ead7c19f583399a250329c4254906f884ca7cffab8ab3e2af0976fb791d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765614709043028065,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-bzgr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3b71930-7660-460b-b10c-f3b1e7fb90be,},Annotations:map[string]string{io.kubernetes.container.hash: 6f36061b,io.kub
ernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:76a36cc5fd27a762e3fc18f5dd652e00b09141db60c88a72a8bf8a03adbd4e95,PodSandboxId:61a8f89598c8e1635ebff114cb1f3393a11c47ccdaf70dcdfa1c2590e2423c4b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:
1765614683576511008,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mbzzw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0b459c4b-cbd7-458f-8a23-5427d16adf42,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2574200a635574968d79ecba41e8bfb8af1e18d33fe1a7a34571011663a1a2b,PodSandboxId:b6f69c472fea5e80036a1a65fa66e34515557c416a991df445af651a469529de,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179
e,State:CONTAINER_EXITED,CreatedAt:1765614683247589832,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jwjv9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 753a5a02-7f66-43ff-9f26-b67823a58f51,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02f24c26b90be89ee1b39bc07bd77d8406e296fe18a2dfc7692a5fb767a975fc,PodSandboxId:f6e4b3a7f3b2cebda49bcac9249fd18530a7b6a80cf8de63b189f61befe03520,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777
a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765614662567685700,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40a1c68c-2c20-480c-9339-6eeb11a0e5d4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6018cebb698122a367e923d21ef2146b5952bf1db623c039d9c5bc8f4edb460,PodSandboxId:ded4eed2460929396d294695eaacb3e679ce19e5fdb445bb7e9bc2bbd6e92a7b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f
8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765614639585809431,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-fv8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ada580-f960-46ba-a686-1cf02b573962,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9fa976e19fa711cb458e6215fbe3e10fe811f008e25fe3c430cca26ed33945,PodSandboxId:4de8c11681f0a51f3ee7cc30dbbbbf7d7a490a025704aaa163480673131248f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530
d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765614636677728568,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f88dd7f0-f94c-48ca-a7b0-7461dc3a2e16,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c394f439277efeb86bf03ebd00c3259d06c6fc6f983dfbd688bf7df9bbb81d00,PodSandboxId:165bfc56736b4f9e8f5e4ae2f75baed13c8e177ab7047c95ed60cd8de8a59690,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e544396
9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765614629365625890,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qk82t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98132a09-ca4a-4070-b715-3def082d8cd1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb50ab34a166440aa33a57ee5c71f03a547a7abca69be43821c017e4f089d55d,PodSandboxId:42096456024fabaf7c4a400ccaa456f62b5a69ae56edf9fe104d1bb0d4110f79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765614628596986809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t9crl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50a42b7-5b85-4440-b27c-f3a2376cdfac,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:3cc9e2f1a4cb6ef6a44404dcb1587a4e43a789750fcf7cfe3525d4470a02049a,PodSandboxId:5666287ef23de91755fcd81697f8d770d1d8f097f014b8fdc5daa078003f6d25,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765614616381250543,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45f5da8a7d034120e63f54164f74715c,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dbe8cce3d6ce5291793ae044ed8d5541cff87277434acc4b3df5605b7bcb49,PodSandboxId:3a1ef997a680ca0ba3454fc0be74272338e93d10942c95f2a4bbedbe9958a341,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765614616348314996,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfde6bd3d488a9800f2e4971558d5ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e3580417c1e7a17a97e5bc733321c2e8a09689aef96d95594f00eb9208bca8,PodSandboxId:7d2cf9d2fda04cba1cdbe9ec1ca44112ef1f66f67d38e63178f17f420da73dc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765614616293712024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4e927105679e7941071b339f30dde,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubern
etes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c211ba7e8e2758a6034624ff132895246615851f7010c72af0d58d5dcc29b4,PodSandboxId:b1eddffe49be62022cc7f3005046ed23d842c0663dfcc83b3b8439048a31322d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765614615893104196,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b87e63
5a2b52b03e000348992f684,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=435f0f49-a32c-4a1d-a356-c8c1b1dc3a54 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.085294295Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ce1c569-c57f-474b-a0c7-40503b1be067 name=/runtime.v1.RuntimeService/Version
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.085428445Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ce1c569-c57f-474b-a0c7-40503b1be067 name=/runtime.v1.RuntimeService/Version
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.087132350Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1991686-5c39-45c4-ab51-f53cd13cf547 name=/runtime.v1.ImageService/ImageFsInfo
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.088416557Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765614906088384974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:546838,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1991686-5c39-45c4-ab51-f53cd13cf547 name=/runtime.v1.ImageService/ImageFsInfo
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.089723297Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2cd3c1d8-3f56-4c5d-8a71-2bd017dfa094 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.089840229Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2cd3c1d8-3f56-4c5d-8a71-2bd017dfa094 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.090736335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f6d4159bb313b29b848d2a01545e2bd972786fdba122275f9cf4a27684260fc,PodSandboxId:68a2d4cc7bf7c1e50e10d7b1c4038ef53b43a7696c313896c584514f02490911,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765614764509897600,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01c1c75f-6820-4ed0-adec-927c0fe8b534,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a463f2fb926733e1d47efa227c43a1d469c24d567b16b61d48151c6df2d0dbc,PodSandboxId:366bee97de5b2407f50a1cbb1f93cff7abe3fb1fb256d81f9b05d62ed63f07e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765614721906207640,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c1bba69-7ed7-4165-8c95-96b84fd3c6d0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7459c2ce3b80a40a713d64a6e31e0a0423bbbcfa2489d1fe378bf461d9f8794,PodSandboxId:d54d5ead7c19f583399a250329c4254906f884ca7cffab8ab3e2af0976fb791d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765614709043028065,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-bzgr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3b71930-7660-460b-b10c-f3b1e7fb90be,},Annotations:map[string]string{io.kubernetes.container.hash: 6f36061b,io.kub
ernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:76a36cc5fd27a762e3fc18f5dd652e00b09141db60c88a72a8bf8a03adbd4e95,PodSandboxId:61a8f89598c8e1635ebff114cb1f3393a11c47ccdaf70dcdfa1c2590e2423c4b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:
1765614683576511008,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mbzzw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0b459c4b-cbd7-458f-8a23-5427d16adf42,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2574200a635574968d79ecba41e8bfb8af1e18d33fe1a7a34571011663a1a2b,PodSandboxId:b6f69c472fea5e80036a1a65fa66e34515557c416a991df445af651a469529de,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179
e,State:CONTAINER_EXITED,CreatedAt:1765614683247589832,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jwjv9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 753a5a02-7f66-43ff-9f26-b67823a58f51,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02f24c26b90be89ee1b39bc07bd77d8406e296fe18a2dfc7692a5fb767a975fc,PodSandboxId:f6e4b3a7f3b2cebda49bcac9249fd18530a7b6a80cf8de63b189f61befe03520,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777
a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765614662567685700,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40a1c68c-2c20-480c-9339-6eeb11a0e5d4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6018cebb698122a367e923d21ef2146b5952bf1db623c039d9c5bc8f4edb460,PodSandboxId:ded4eed2460929396d294695eaacb3e679ce19e5fdb445bb7e9bc2bbd6e92a7b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f
8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765614639585809431,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-fv8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ada580-f960-46ba-a686-1cf02b573962,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9fa976e19fa711cb458e6215fbe3e10fe811f008e25fe3c430cca26ed33945,PodSandboxId:4de8c11681f0a51f3ee7cc30dbbbbf7d7a490a025704aaa163480673131248f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530
d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765614636677728568,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f88dd7f0-f94c-48ca-a7b0-7461dc3a2e16,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c394f439277efeb86bf03ebd00c3259d06c6fc6f983dfbd688bf7df9bbb81d00,PodSandboxId:165bfc56736b4f9e8f5e4ae2f75baed13c8e177ab7047c95ed60cd8de8a59690,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e544396
9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765614629365625890,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qk82t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98132a09-ca4a-4070-b715-3def082d8cd1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb50ab34a166440aa33a57ee5c71f03a547a7abca69be43821c017e4f089d55d,PodSandboxId:42096456024fabaf7c4a400ccaa456f62b5a69ae56edf9fe104d1bb0d4110f79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765614628596986809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t9crl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50a42b7-5b85-4440-b27c-f3a2376cdfac,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:3cc9e2f1a4cb6ef6a44404dcb1587a4e43a789750fcf7cfe3525d4470a02049a,PodSandboxId:5666287ef23de91755fcd81697f8d770d1d8f097f014b8fdc5daa078003f6d25,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765614616381250543,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45f5da8a7d034120e63f54164f74715c,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dbe8cce3d6ce5291793ae044ed8d5541cff87277434acc4b3df5605b7bcb49,PodSandboxId:3a1ef997a680ca0ba3454fc0be74272338e93d10942c95f2a4bbedbe9958a341,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765614616348314996,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfde6bd3d488a9800f2e4971558d5ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e3580417c1e7a17a97e5bc733321c2e8a09689aef96d95594f00eb9208bca8,PodSandboxId:7d2cf9d2fda04cba1cdbe9ec1ca44112ef1f66f67d38e63178f17f420da73dc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765614616293712024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4e927105679e7941071b339f30dde,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubern
etes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c211ba7e8e2758a6034624ff132895246615851f7010c72af0d58d5dcc29b4,PodSandboxId:b1eddffe49be62022cc7f3005046ed23d842c0663dfcc83b3b8439048a31322d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765614615893104196,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b87e63
5a2b52b03e000348992f684,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2cd3c1d8-3f56-4c5d-8a71-2bd017dfa094 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.124069366Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50975def-74d3-4846-b608-07fea916e659 name=/runtime.v1.RuntimeService/Version
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.124181480Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50975def-74d3-4846-b608-07fea916e659 name=/runtime.v1.RuntimeService/Version
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.126118275Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=59e28ff8-1e85-43f5-8e61-46f96be2ddd0 name=/runtime.v1.ImageService/ImageFsInfo
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.128163708Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765614906128133150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:546838,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59e28ff8-1e85-43f5-8e61-46f96be2ddd0 name=/runtime.v1.ImageService/ImageFsInfo
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.129190963Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e28964b9-c44e-4145-bfa2-e7b8bf00dd94 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.129248146Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e28964b9-c44e-4145-bfa2-e7b8bf00dd94 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.129674802Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f6d4159bb313b29b848d2a01545e2bd972786fdba122275f9cf4a27684260fc,PodSandboxId:68a2d4cc7bf7c1e50e10d7b1c4038ef53b43a7696c313896c584514f02490911,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765614764509897600,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01c1c75f-6820-4ed0-adec-927c0fe8b534,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a463f2fb926733e1d47efa227c43a1d469c24d567b16b61d48151c6df2d0dbc,PodSandboxId:366bee97de5b2407f50a1cbb1f93cff7abe3fb1fb256d81f9b05d62ed63f07e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765614721906207640,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c1bba69-7ed7-4165-8c95-96b84fd3c6d0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7459c2ce3b80a40a713d64a6e31e0a0423bbbcfa2489d1fe378bf461d9f8794,PodSandboxId:d54d5ead7c19f583399a250329c4254906f884ca7cffab8ab3e2af0976fb791d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765614709043028065,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-bzgr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3b71930-7660-460b-b10c-f3b1e7fb90be,},Annotations:map[string]string{io.kubernetes.container.hash: 6f36061b,io.kub
ernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:76a36cc5fd27a762e3fc18f5dd652e00b09141db60c88a72a8bf8a03adbd4e95,PodSandboxId:61a8f89598c8e1635ebff114cb1f3393a11c47ccdaf70dcdfa1c2590e2423c4b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:
1765614683576511008,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mbzzw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0b459c4b-cbd7-458f-8a23-5427d16adf42,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2574200a635574968d79ecba41e8bfb8af1e18d33fe1a7a34571011663a1a2b,PodSandboxId:b6f69c472fea5e80036a1a65fa66e34515557c416a991df445af651a469529de,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179
e,State:CONTAINER_EXITED,CreatedAt:1765614683247589832,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jwjv9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 753a5a02-7f66-43ff-9f26-b67823a58f51,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02f24c26b90be89ee1b39bc07bd77d8406e296fe18a2dfc7692a5fb767a975fc,PodSandboxId:f6e4b3a7f3b2cebda49bcac9249fd18530a7b6a80cf8de63b189f61befe03520,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777
a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765614662567685700,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40a1c68c-2c20-480c-9339-6eeb11a0e5d4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6018cebb698122a367e923d21ef2146b5952bf1db623c039d9c5bc8f4edb460,PodSandboxId:ded4eed2460929396d294695eaacb3e679ce19e5fdb445bb7e9bc2bbd6e92a7b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f
8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765614639585809431,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-fv8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ada580-f960-46ba-a686-1cf02b573962,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9fa976e19fa711cb458e6215fbe3e10fe811f008e25fe3c430cca26ed33945,PodSandboxId:4de8c11681f0a51f3ee7cc30dbbbbf7d7a490a025704aaa163480673131248f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530
d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765614636677728568,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f88dd7f0-f94c-48ca-a7b0-7461dc3a2e16,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c394f439277efeb86bf03ebd00c3259d06c6fc6f983dfbd688bf7df9bbb81d00,PodSandboxId:165bfc56736b4f9e8f5e4ae2f75baed13c8e177ab7047c95ed60cd8de8a59690,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e544396
9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765614629365625890,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qk82t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98132a09-ca4a-4070-b715-3def082d8cd1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb50ab34a166440aa33a57ee5c71f03a547a7abca69be43821c017e4f089d55d,PodSandboxId:42096456024fabaf7c4a400ccaa456f62b5a69ae56edf9fe104d1bb0d4110f79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765614628596986809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t9crl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50a42b7-5b85-4440-b27c-f3a2376cdfac,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:3cc9e2f1a4cb6ef6a44404dcb1587a4e43a789750fcf7cfe3525d4470a02049a,PodSandboxId:5666287ef23de91755fcd81697f8d770d1d8f097f014b8fdc5daa078003f6d25,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765614616381250543,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45f5da8a7d034120e63f54164f74715c,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dbe8cce3d6ce5291793ae044ed8d5541cff87277434acc4b3df5605b7bcb49,PodSandboxId:3a1ef997a680ca0ba3454fc0be74272338e93d10942c95f2a4bbedbe9958a341,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765614616348314996,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfde6bd3d488a9800f2e4971558d5ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e3580417c1e7a17a97e5bc733321c2e8a09689aef96d95594f00eb9208bca8,PodSandboxId:7d2cf9d2fda04cba1cdbe9ec1ca44112ef1f66f67d38e63178f17f420da73dc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765614616293712024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4e927105679e7941071b339f30dde,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubern
etes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c211ba7e8e2758a6034624ff132895246615851f7010c72af0d58d5dcc29b4,PodSandboxId:b1eddffe49be62022cc7f3005046ed23d842c0663dfcc83b3b8439048a31322d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765614615893104196,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b87e63
5a2b52b03e000348992f684,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e28964b9-c44e-4145-bfa2-e7b8bf00dd94 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.161039888Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9fba7d92-75dc-432b-b001-cda006972dda name=/runtime.v1.RuntimeService/Version
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.161361433Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9fba7d92-75dc-432b-b001-cda006972dda name=/runtime.v1.RuntimeService/Version
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.163209917Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=050d05fd-961f-41ba-85dc-c2b3fa6d4627 name=/runtime.v1.ImageService/ImageFsInfo
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.164401387Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765614906164372834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:546838,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=050d05fd-961f-41ba-85dc-c2b3fa6d4627 name=/runtime.v1.ImageService/ImageFsInfo
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.166106344Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72a2967d-c952-420c-84be-c1ca25690f16 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.166280023Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72a2967d-c952-420c-84be-c1ca25690f16 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 08:35:06 addons-917695 crio[820]: time="2025-12-13 08:35:06.166673187Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1f6d4159bb313b29b848d2a01545e2bd972786fdba122275f9cf4a27684260fc,PodSandboxId:68a2d4cc7bf7c1e50e10d7b1c4038ef53b43a7696c313896c584514f02490911,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765614764509897600,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01c1c75f-6820-4ed0-adec-927c0fe8b534,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a463f2fb926733e1d47efa227c43a1d469c24d567b16b61d48151c6df2d0dbc,PodSandboxId:366bee97de5b2407f50a1cbb1f93cff7abe3fb1fb256d81f9b05d62ed63f07e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765614721906207640,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7c1bba69-7ed7-4165-8c95-96b84fd3c6d0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7459c2ce3b80a40a713d64a6e31e0a0423bbbcfa2489d1fe378bf461d9f8794,PodSandboxId:d54d5ead7c19f583399a250329c4254906f884ca7cffab8ab3e2af0976fb791d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765614709043028065,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-bzgr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c3b71930-7660-460b-b10c-f3b1e7fb90be,},Annotations:map[string]string{io.kubernetes.container.hash: 6f36061b,io.kub
ernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:76a36cc5fd27a762e3fc18f5dd652e00b09141db60c88a72a8bf8a03adbd4e95,PodSandboxId:61a8f89598c8e1635ebff114cb1f3393a11c47ccdaf70dcdfa1c2590e2423c4b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:
1765614683576511008,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-mbzzw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0b459c4b-cbd7-458f-8a23-5427d16adf42,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2574200a635574968d79ecba41e8bfb8af1e18d33fe1a7a34571011663a1a2b,PodSandboxId:b6f69c472fea5e80036a1a65fa66e34515557c416a991df445af651a469529de,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179
e,State:CONTAINER_EXITED,CreatedAt:1765614683247589832,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jwjv9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 753a5a02-7f66-43ff-9f26-b67823a58f51,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02f24c26b90be89ee1b39bc07bd77d8406e296fe18a2dfc7692a5fb767a975fc,PodSandboxId:f6e4b3a7f3b2cebda49bcac9249fd18530a7b6a80cf8de63b189f61befe03520,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777
a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765614662567685700,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40a1c68c-2c20-480c-9339-6eeb11a0e5d4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6018cebb698122a367e923d21ef2146b5952bf1db623c039d9c5bc8f4edb460,PodSandboxId:ded4eed2460929396d294695eaacb3e679ce19e5fdb445bb7e9bc2bbd6e92a7b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f
8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765614639585809431,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-fv8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06ada580-f960-46ba-a686-1cf02b573962,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9fa976e19fa711cb458e6215fbe3e10fe811f008e25fe3c430cca26ed33945,PodSandboxId:4de8c11681f0a51f3ee7cc30dbbbbf7d7a490a025704aaa163480673131248f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530
d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765614636677728568,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f88dd7f0-f94c-48ca-a7b0-7461dc3a2e16,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c394f439277efeb86bf03ebd00c3259d06c6fc6f983dfbd688bf7df9bbb81d00,PodSandboxId:165bfc56736b4f9e8f5e4ae2f75baed13c8e177ab7047c95ed60cd8de8a59690,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e544396
9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765614629365625890,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qk82t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98132a09-ca4a-4070-b715-3def082d8cd1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb50ab34a166440aa33a57ee5c71f03a547a7abca69be43821c017e4f089d55d,PodSandboxId:42096456024fabaf7c4a400ccaa456f62b5a69ae56edf9fe104d1bb0d4110f79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765614628596986809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t9crl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b50a42b7-5b85-4440-b27c-f3a2376cdfac,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},&Container{Id:3cc9e2f1a4cb6ef6a44404dcb1587a4e43a789750fcf7cfe3525d4470a02049a,PodSandboxId:5666287ef23de91755fcd81697f8d770d1d8f097f014b8fdc5daa078003f6d25,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765614616381250543,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45f5da8a7d034120e63f54164f74715c,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5dbe8cce3d6ce5291793ae044ed8d5541cff87277434acc4b3df5605b7bcb49,PodSandboxId:3a1ef997a680ca0ba3454fc0be74272338e93d10942c95f2a4bbedbe9958a341,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765614616348314996,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfde6bd3d488a9800f2e4971558d5ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"
TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e3580417c1e7a17a97e5bc733321c2e8a09689aef96d95594f00eb9208bca8,PodSandboxId:7d2cf9d2fda04cba1cdbe9ec1ca44112ef1f66f67d38e63178f17f420da73dc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765614616293712024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a4e927105679e7941071b339f30dde,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubern
etes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6c211ba7e8e2758a6034624ff132895246615851f7010c72af0d58d5dcc29b4,PodSandboxId:b1eddffe49be62022cc7f3005046ed23d842c0663dfcc83b3b8439048a31322d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765614615893104196,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-917695,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a60b87e63
5a2b52b03e000348992f684,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72a2967d-c952-420c-84be-c1ca25690f16 name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
1f6d4159bb313 a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c 2 minutes ago Running nginx 0 68a2d4cc7bf7c nginx default
9a463f2fb9267 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 3 minutes ago Running busybox 0 366bee97de5b2 busybox default
a7459c2ce3b80 registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad 3 minutes ago Running controller 0 d54d5ead7c19f ingress-nginx-controller-85d4c799dd-bzgr8 ingress-nginx
76a36cc5fd27a a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e 3 minutes ago Exited patch 1 61a8f89598c8e ingress-nginx-admission-patch-mbzzw ingress-nginx
d2574200a6355 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285 3 minutes ago Exited create 0 b6f69c472fea5 ingress-nginx-admission-create-jwjv9 ingress-nginx
02f24c26b90be docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 4 minutes ago Running minikube-ingress-dns 0 f6e4b3a7f3b2c kube-ingress-dns-minikube kube-system
d6018cebb6981 docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 ded4eed246092 amd-gpu-device-plugin-fv8qk kube-system
5b9fa976e19fa 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 4de8c11681f0a storage-provisioner kube-system
c394f439277ef 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 165bfc56736b4 coredns-66bc5c9577-qk82t kube-system
bb50ab34a1664 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45 4 minutes ago Running kube-proxy 0 42096456024fa kube-proxy-t9crl kube-system
3cc9e2f1a4cb6 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8 4 minutes ago Running kube-controller-manager 0 5666287ef23de kube-controller-manager-addons-917695 kube-system
c5dbe8cce3d6c a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1 4 minutes ago Running etcd 0 3a1ef997a680c etcd-addons-917695 kube-system
26e3580417c1e a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85 4 minutes ago Running kube-apiserver 0 7d2cf9d2fda04 kube-apiserver-addons-917695 kube-system
d6c211ba7e8e2 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952 4 minutes ago Running kube-scheduler 0 b1eddffe49be6 kube-scheduler-addons-917695 kube-system
==> coredns [c394f439277efeb86bf03ebd00c3259d06c6fc6f983dfbd688bf7df9bbb81d00] <==
[INFO] 10.244.0.8:32806 - 19357 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000394978s
[INFO] 10.244.0.8:32806 - 31307 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000158517s
[INFO] 10.244.0.8:32806 - 33851 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00008586s
[INFO] 10.244.0.8:32806 - 6566 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00010822s
[INFO] 10.244.0.8:32806 - 42666 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000078394s
[INFO] 10.244.0.8:32806 - 41581 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000164022s
[INFO] 10.244.0.8:32806 - 61292 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000113154s
[INFO] 10.244.0.8:51130 - 48536 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000156432s
[INFO] 10.244.0.8:51130 - 48864 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000324038s
[INFO] 10.244.0.8:52414 - 63175 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114249s
[INFO] 10.244.0.8:52414 - 62953 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126653s
[INFO] 10.244.0.8:47288 - 39992 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094012s
[INFO] 10.244.0.8:47288 - 40228 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000113724s
[INFO] 10.244.0.8:47087 - 7658 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000088475s
[INFO] 10.244.0.8:47087 - 7215 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000201639s
[INFO] 10.244.0.23:46684 - 34471 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000392123s
[INFO] 10.244.0.23:35847 - 16168 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000500956s
[INFO] 10.244.0.23:55738 - 24299 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00010832s
[INFO] 10.244.0.23:60736 - 12659 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000102728s
[INFO] 10.244.0.23:57686 - 17346 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000185445s
[INFO] 10.244.0.23:45048 - 50868 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000081868s
[INFO] 10.244.0.23:48212 - 3347 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001195853s
[INFO] 10.244.0.23:52309 - 55971 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 306 0.001413875s
[INFO] 10.244.0.28:47364 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000419606s
[INFO] 10.244.0.28:59444 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000165626s
==> describe nodes <==
Name: addons-917695
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-917695
kubernetes.io/os=linux
minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
minikube.k8s.io/name=addons-917695
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_13T08_30_23_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-917695
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 13 Dec 2025 08:30:19 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-917695
AcquireTime: <unset>
RenewTime: Sat, 13 Dec 2025 08:34:57 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 13 Dec 2025 08:32:55 +0000 Sat, 13 Dec 2025 08:30:17 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 13 Dec 2025 08:32:55 +0000 Sat, 13 Dec 2025 08:30:17 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 13 Dec 2025 08:32:55 +0000 Sat, 13 Dec 2025 08:30:17 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 13 Dec 2025 08:32:55 +0000 Sat, 13 Dec 2025 08:30:23 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.154
Hostname: addons-917695
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
System Info:
Machine ID: 412eefcb63ce429c917fa5530725ef67
System UUID: 412eefcb-63ce-429c-917f-a5530725ef67
Boot ID: c5eef4a8-274f-4b8e-afb8-04f83410bea1
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.2
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m8s
default hello-world-app-5d498dc89-p9lr2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m23s
ingress-nginx ingress-nginx-controller-85d4c799dd-bzgr8 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m30s
kube-system amd-gpu-device-plugin-fv8qk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m35s
kube-system coredns-66bc5c9577-qk82t 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m39s
kube-system etcd-addons-917695 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m44s
kube-system kube-apiserver-addons-917695 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m44s
kube-system kube-controller-manager-addons-917695 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m44s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m33s
kube-system kube-proxy-t9crl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m39s
kube-system kube-scheduler-addons-917695 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m44s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m32s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m36s kube-proxy
Normal Starting 4m51s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m51s (x8 over 4m51s) kubelet Node addons-917695 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m51s (x8 over 4m51s) kubelet Node addons-917695 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m51s (x7 over 4m51s) kubelet Node addons-917695 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m51s kubelet Updated Node Allocatable limit across pods
Normal Starting 4m44s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m44s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m44s kubelet Node addons-917695 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m44s kubelet Node addons-917695 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m44s kubelet Node addons-917695 status is now: NodeHasSufficientPID
Normal NodeReady 4m43s kubelet Node addons-917695 status is now: NodeReady
Normal RegisteredNode 4m40s node-controller Node addons-917695 event: Registered Node addons-917695 in Controller
==> dmesg <==
[ +0.000019] kauditd_printk_skb: 312 callbacks suppressed
[ +0.426626] kauditd_printk_skb: 323 callbacks suppressed
[ +5.901503] kauditd_printk_skb: 374 callbacks suppressed
[ +6.331895] kauditd_printk_skb: 5 callbacks suppressed
[Dec13 08:31] kauditd_printk_skb: 11 callbacks suppressed
[ +7.857601] kauditd_printk_skb: 32 callbacks suppressed
[ +5.694150] kauditd_printk_skb: 5 callbacks suppressed
[ +5.648703] kauditd_printk_skb: 38 callbacks suppressed
[ +1.842879] kauditd_printk_skb: 121 callbacks suppressed
[ +7.350415] kauditd_printk_skb: 41 callbacks suppressed
[ +0.000096] kauditd_printk_skb: 201 callbacks suppressed
[ +2.215664] kauditd_printk_skb: 65 callbacks suppressed
[ +8.366644] kauditd_printk_skb: 47 callbacks suppressed
[Dec13 08:32] kauditd_printk_skb: 47 callbacks suppressed
[ +11.140707] kauditd_printk_skb: 17 callbacks suppressed
[ +0.000045] kauditd_printk_skb: 22 callbacks suppressed
[ +1.394635] kauditd_printk_skb: 107 callbacks suppressed
[ +0.857730] kauditd_printk_skb: 99 callbacks suppressed
[ +0.000032] kauditd_printk_skb: 103 callbacks suppressed
[ +3.788578] kauditd_printk_skb: 141 callbacks suppressed
[ +4.055829] kauditd_printk_skb: 94 callbacks suppressed
[Dec13 08:33] kauditd_printk_skb: 35 callbacks suppressed
[ +0.462567] kauditd_printk_skb: 91 callbacks suppressed
[ +1.628936] kauditd_printk_skb: 44 callbacks suppressed
[Dec13 08:35] kauditd_printk_skb: 107 callbacks suppressed
==> etcd [c5dbe8cce3d6ce5291793ae044ed8d5541cff87277434acc4b3df5605b7bcb49] <==
{"level":"info","ts":"2025-12-13T08:31:40.192618Z","caller":"traceutil/trace.go:172","msg":"trace[505564146] transaction","detail":"{read_only:false; response_revision:1142; number_of_response:1; }","duration":"129.97278ms","start":"2025-12-13T08:31:40.062404Z","end":"2025-12-13T08:31:40.192377Z","steps":["trace[505564146] 'process raft request' (duration: 129.121809ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-13T08:31:50.909660Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.987607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
{"level":"info","ts":"2025-12-13T08:31:50.909728Z","caller":"traceutil/trace.go:172","msg":"trace[614981978] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1190; }","duration":"159.061424ms","start":"2025-12-13T08:31:50.750656Z","end":"2025-12-13T08:31:50.909717Z","steps":["trace[614981978] 'range keys from in-memory index tree' (duration: 158.893967ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-13T08:31:50.911937Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.469468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-13T08:31:50.912001Z","caller":"traceutil/trace.go:172","msg":"trace[905319841] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1190; }","duration":"135.760797ms","start":"2025-12-13T08:31:50.776229Z","end":"2025-12-13T08:31:50.911990Z","steps":["trace[905319841] 'range keys from in-memory index tree' (duration: 133.168659ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T08:31:54.675777Z","caller":"traceutil/trace.go:172","msg":"trace[1830533199] transaction","detail":"{read_only:false; response_revision:1202; number_of_response:1; }","duration":"145.254738ms","start":"2025-12-13T08:31:54.530509Z","end":"2025-12-13T08:31:54.675764Z","steps":["trace[1830533199] 'process raft request' (duration: 145.152074ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T08:32:00.516681Z","caller":"traceutil/trace.go:172","msg":"trace[1750467939] transaction","detail":"{read_only:false; response_revision:1234; number_of_response:1; }","duration":"125.984338ms","start":"2025-12-13T08:32:00.390684Z","end":"2025-12-13T08:32:00.516668Z","steps":["trace[1750467939] 'process raft request' (duration: 125.869471ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T08:32:24.899858Z","caller":"traceutil/trace.go:172","msg":"trace[1998752364] linearizableReadLoop","detail":"{readStateIndex:1433; appliedIndex:1433; }","duration":"114.274356ms","start":"2025-12-13T08:32:24.785555Z","end":"2025-12-13T08:32:24.899830Z","steps":["trace[1998752364] 'read index received' (duration: 114.269921ms)","trace[1998752364] 'applied index is now lower than readState.Index' (duration: 3.776µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-13T08:32:24.900777Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.143526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/kube-system/nvidia-device-plugin-daemonset-9884d469d\" limit:1 ","response":"range_response_count:1 size:2898"}
{"level":"info","ts":"2025-12-13T08:32:24.900836Z","caller":"traceutil/trace.go:172","msg":"trace[738720322] range","detail":"{range_begin:/registry/controllerrevisions/kube-system/nvidia-device-plugin-daemonset-9884d469d; range_end:; response_count:1; response_revision:1392; }","duration":"115.275408ms","start":"2025-12-13T08:32:24.785552Z","end":"2025-12-13T08:32:24.900828Z","steps":["trace[738720322] 'agreement among raft nodes before linearized reading' (duration: 114.478285ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T08:32:24.900932Z","caller":"traceutil/trace.go:172","msg":"trace[1335731661] transaction","detail":"{read_only:false; response_revision:1393; number_of_response:1; }","duration":"117.808554ms","start":"2025-12-13T08:32:24.783110Z","end":"2025-12-13T08:32:24.900919Z","steps":["trace[1335731661] 'process raft request' (duration: 116.705848ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-13T08:32:24.901103Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.509378ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-13T08:32:24.901155Z","caller":"traceutil/trace.go:172","msg":"trace[176730814] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1393; }","duration":"115.563323ms","start":"2025-12-13T08:32:24.785586Z","end":"2025-12-13T08:32:24.901149Z","steps":["trace[176730814] 'agreement among raft nodes before linearized reading' (duration: 115.489872ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T08:32:24.901308Z","caller":"traceutil/trace.go:172","msg":"trace[1924384903] transaction","detail":"{read_only:false; response_revision:1394; number_of_response:1; }","duration":"108.399754ms","start":"2025-12-13T08:32:24.792903Z","end":"2025-12-13T08:32:24.901303Z","steps":["trace[1924384903] 'process raft request' (duration: 108.353136ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T08:32:28.361580Z","caller":"traceutil/trace.go:172","msg":"trace[352310809] linearizableReadLoop","detail":"{readStateIndex:1453; appliedIndex:1453; }","duration":"225.244207ms","start":"2025-12-13T08:32:28.136316Z","end":"2025-12-13T08:32:28.361560Z","steps":["trace[352310809] 'read index received' (duration: 225.239099ms)","trace[352310809] 'applied index is now lower than readState.Index' (duration: 4.238µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-13T08:32:28.361780Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"225.465855ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" limit:1 ","response":"range_response_count:1 size:822"}
{"level":"info","ts":"2025-12-13T08:32:28.361801Z","caller":"traceutil/trace.go:172","msg":"trace[975671476] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1411; }","duration":"225.50974ms","start":"2025-12-13T08:32:28.136285Z","end":"2025-12-13T08:32:28.361795Z","steps":["trace[975671476] 'agreement among raft nodes before linearized reading' (duration: 225.375061ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-13T08:32:28.362126Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"220.065923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-13T08:32:28.362148Z","caller":"traceutil/trace.go:172","msg":"trace[1761542371] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1412; }","duration":"220.092226ms","start":"2025-12-13T08:32:28.142050Z","end":"2025-12-13T08:32:28.362143Z","steps":["trace[1761542371] 'agreement among raft nodes before linearized reading' (duration: 220.053769ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T08:32:28.362343Z","caller":"traceutil/trace.go:172","msg":"trace[1658786718] transaction","detail":"{read_only:false; response_revision:1412; number_of_response:1; }","duration":"242.257919ms","start":"2025-12-13T08:32:28.120077Z","end":"2025-12-13T08:32:28.362335Z","steps":["trace[1658786718] 'process raft request' (duration: 241.916727ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-13T08:32:28.362626Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"168.644999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-13T08:32:28.362709Z","caller":"traceutil/trace.go:172","msg":"trace[837552846] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1412; }","duration":"168.732597ms","start":"2025-12-13T08:32:28.193970Z","end":"2025-12-13T08:32:28.362703Z","steps":["trace[837552846] 'agreement among raft nodes before linearized reading' (duration: 168.62688ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-13T08:32:28.362924Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.664707ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-13T08:32:28.362967Z","caller":"traceutil/trace.go:172","msg":"trace[629439109] range","detail":"{range_begin:/registry/networkpolicies; range_end:; response_count:0; response_revision:1412; }","duration":"136.710728ms","start":"2025-12-13T08:32:28.226251Z","end":"2025-12-13T08:32:28.362962Z","steps":["trace[629439109] 'agreement among raft nodes before linearized reading' (duration: 136.652088ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T08:32:39.043108Z","caller":"traceutil/trace.go:172","msg":"trace[1544671139] transaction","detail":"{read_only:false; response_revision:1522; number_of_response:1; }","duration":"230.997862ms","start":"2025-12-13T08:32:38.812096Z","end":"2025-12-13T08:32:39.043094Z","steps":["trace[1544671139] 'process raft request' (duration: 230.885189ms)"],"step_count":1}
==> kernel <==
08:35:06 up 5 min, 0 users, load average: 0.66, 1.27, 0.66
Linux addons-917695 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [26e3580417c1e7a17a97e5bc733321c2e8a09689aef96d95594f00eb9208bca8] <==
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I1213 08:31:25.687073 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1213 08:32:09.469080 1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:36178: use of closed network connection
E1213 08:32:09.671962 1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:56638: use of closed network connection
I1213 08:32:19.112051 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.236.225"}
I1213 08:32:43.769687 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1213 08:32:43.948575 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.229.196"}
I1213 08:32:47.306127 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
E1213 08:32:49.865606 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I1213 08:33:10.010315 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1213 08:33:10.010603 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1213 08:33:10.030004 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1213 08:33:10.030120 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1213 08:33:10.045825 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1213 08:33:10.045894 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1213 08:33:10.078488 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1213 08:33:10.078644 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1213 08:33:10.096774 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1213 08:33:10.097140 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1213 08:33:11.032004 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1213 08:33:11.097372 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1213 08:33:11.114025 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I1213 08:33:26.637879 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1213 08:35:05.025623 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.6.177"}
==> kube-controller-manager [3cc9e2f1a4cb6ef6a44404dcb1587a4e43a789750fcf7cfe3525d4470a02049a] <==
E1213 08:33:21.060245 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 08:33:25.860955 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 08:33:25.861985 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
I1213 08:33:26.827320 1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
I1213 08:33:26.827482 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1213 08:33:26.874584 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1213 08:33:26.874638 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1213 08:33:27.545198 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 08:33:27.546636 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 08:33:32.520161 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 08:33:32.521343 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 08:33:44.709709 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 08:33:44.710726 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 08:33:46.901692 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 08:33:46.902899 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 08:33:50.096569 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 08:33:50.097679 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 08:34:22.637317 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 08:34:22.638399 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 08:34:24.429646 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 08:34:24.431175 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 08:34:27.099275 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 08:34:27.100737 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 08:34:55.083053 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 08:34:55.084038 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
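The repeating pairs above come from a controller's metadata informer (the reflector at k8s.io/client-go/metadata/metadatainformer/informer.go:138): the streaming WatchList request fails, the reflector falls back to a plain LIST, and that LIST also returns "the server could not find the requested resource", which typically means the watched group/version/resource is no longer served by the API server. A minimal client-go sketch of how such a metadata informer is wired up; the GVR, resync period, and handler here are illustrative assumptions, not taken from this log:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/metadata/metadatainformer"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // the failing controller runs in-cluster
	if err != nil {
		panic(err)
	}
	// A metadata client lists/watches only object metadata
	// (*v1.PartialObjectMetadata), which is exactly the type the failing
	// reflector in the log is watching.
	mc, err := metadata.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	factory := metadatainformer.NewSharedInformerFactory(mc, 10*time.Minute)
	// Hypothetical GVR: if the server stops serving it, both the WatchList
	// request and the LIST fallback fail with "the server could not find
	// the requested resource", producing error pairs like the ones above.
	gvr := schema.GroupVersionResource{Group: "example.com", Version: "v1", Resource: "widgets"}
	informer := factory.ForResource(gvr).Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) { fmt.Println("saw metadata for an object") },
	})
	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
}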
==> kube-proxy [bb50ab34a166440aa33a57ee5c71f03a547a7abca69be43821c017e4f089d55d] <==
I1213 08:30:29.534238 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1213 08:30:29.639757 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1213 08:30:29.639805 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.154"]
E1213 08:30:29.639937 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1213 08:30:29.822742 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1213 08:30:29.823589 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1213 08:30:29.823647 1 server_linux.go:132] "Using iptables Proxier"
I1213 08:30:29.846107 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1213 08:30:29.846406 1 server.go:527] "Version info" version="v1.34.2"
I1213 08:30:29.846417 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1213 08:30:29.860851 1 config.go:200] "Starting service config controller"
I1213 08:30:29.860879 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1213 08:30:29.860899 1 config.go:106] "Starting endpoint slice config controller"
I1213 08:30:29.860902 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1213 08:30:29.860913 1 config.go:403] "Starting serviceCIDR config controller"
I1213 08:30:29.860916 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1213 08:30:29.870887 1 config.go:309] "Starting node config controller"
I1213 08:30:29.870915 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1213 08:30:29.962309 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1213 08:30:29.962377 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1213 08:30:29.962423 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1213 08:30:29.971059 1 shared_informer.go:356] "Caches are synced" controller="node config"
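The only error in the kube-proxy startup above is the configuration warning: with nodePortAddresses unset, NodePort services accept connections on every local IP, and the message suggests `--nodeport-addresses primary`. A sketch of the equivalent file-based configuration, assuming the k8s.io/kube-proxy staging module's v1alpha1 types (the "primary" value mirrors the flag suggested in the warning; everything else is left at defaults):

package main

import (
	"fmt"

	kubeproxyconfig "k8s.io/kube-proxy/config/v1alpha1"
	"sigs.k8s.io/yaml"
)

func main() {
	// "primary" restricts NodePort listeners to each node's primary IP,
	// addressing the "nodePortAddresses is unset" warning above.
	cfg := kubeproxyconfig.KubeProxyConfiguration{
		NodePortAddresses: []string{"primary"},
	}
	cfg.APIVersion = "kubeproxy.config.k8s.io/v1alpha1"
	cfg.Kind = "KubeProxyConfiguration"
	out, err := yaml.Marshal(&cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // YAML suitable for a kube-proxy --config file
}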
==> kube-scheduler [d6c211ba7e8e2758a6034624ff132895246615851f7010c72af0d58d5dcc29b4] <==
E1213 08:30:19.728326 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1213 08:30:19.728393 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1213 08:30:19.729187 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1213 08:30:19.729292 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1213 08:30:19.729649 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1213 08:30:19.730072 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1213 08:30:19.730114 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1213 08:30:19.730149 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1213 08:30:19.730241 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1213 08:30:19.730257 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1213 08:30:19.730512 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1213 08:30:19.730527 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1213 08:30:20.669591 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1213 08:30:20.699594 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1213 08:30:20.716527 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1213 08:30:20.734836 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1213 08:30:20.809970 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1213 08:30:20.901428 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1213 08:30:20.919187 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1213 08:30:20.926052 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1213 08:30:20.969424 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1213 08:30:21.060189 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1213 08:30:21.132419 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1213 08:30:21.170997 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
I1213 08:30:24.020280 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
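The burst of "is forbidden" errors at 08:30:19-08:30:21 is the scheduler's informers starting before the RBAC bindings for system:kube-scheduler are in place; the reflectors retry, and the final "Caches are synced" line at 08:30:24 shows the race resolved itself. A hedged sketch of probing one of these permissions with a SelfSubjectAccessReview, using the csinodes/list failure above as the example (kubeconfig loading is an illustrative choice):

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the API server whether the current identity may list CSINodes,
	// one of the resources the scheduler was briefly forbidden to list.
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Group:    "storage.k8s.io",
				Resource: "csinodes",
				Verb:     "list",
			},
		},
	}
	resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.TODO(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}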
==> kubelet <==
Dec 13 08:33:24 addons-917695 kubelet[1506]: I1213 08:33:24.060787 1506 scope.go:117] "RemoveContainer" containerID="8b101398a88e9b37bc69d87389b2b45ed02bc301f5b50a322d07c7f5e6f56df8"
Dec 13 08:33:24 addons-917695 kubelet[1506]: I1213 08:33:24.402682 1506 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Dec 13 08:33:32 addons-917695 kubelet[1506]: E1213 08:33:32.761285 1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614812760852921 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:33:32 addons-917695 kubelet[1506]: E1213 08:33:32.761308 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614812760852921 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:33:42 addons-917695 kubelet[1506]: E1213 08:33:42.766296 1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614822765625470 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:33:42 addons-917695 kubelet[1506]: E1213 08:33:42.766332 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614822765625470 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:33:52 addons-917695 kubelet[1506]: E1213 08:33:52.776195 1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614832773888369 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:33:52 addons-917695 kubelet[1506]: E1213 08:33:52.776599 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614832773888369 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:34:02 addons-917695 kubelet[1506]: E1213 08:34:02.779755 1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614842779263293 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:34:02 addons-917695 kubelet[1506]: E1213 08:34:02.779796 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614842779263293 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:34:12 addons-917695 kubelet[1506]: E1213 08:34:12.782994 1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614852782385078 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:34:12 addons-917695 kubelet[1506]: E1213 08:34:12.783024 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614852782385078 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:34:16 addons-917695 kubelet[1506]: I1213 08:34:16.403647 1506 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-fv8qk" secret="" err="secret \"gcp-auth\" not found"
Dec 13 08:34:22 addons-917695 kubelet[1506]: E1213 08:34:22.786470 1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614862785967196 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:34:22 addons-917695 kubelet[1506]: E1213 08:34:22.786496 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614862785967196 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:34:32 addons-917695 kubelet[1506]: E1213 08:34:32.789381 1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614872788823829 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:34:32 addons-917695 kubelet[1506]: E1213 08:34:32.789415 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614872788823829 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:34:34 addons-917695 kubelet[1506]: I1213 08:34:34.402587 1506 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Dec 13 08:34:42 addons-917695 kubelet[1506]: E1213 08:34:42.792863 1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614882792333898 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:34:42 addons-917695 kubelet[1506]: E1213 08:34:42.792897 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614882792333898 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:34:52 addons-917695 kubelet[1506]: E1213 08:34:52.796002 1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614892795519117 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:34:52 addons-917695 kubelet[1506]: E1213 08:34:52.796028 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614892795519117 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:35:02 addons-917695 kubelet[1506]: E1213 08:35:02.799151 1506 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765614902798684390 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:35:02 addons-917695 kubelet[1506]: E1213 08:35:02.799191 1506 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765614902798684390 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:546838} inodes_used:{value:187}}"
Dec 13 08:35:04 addons-917695 kubelet[1506]: I1213 08:35:04.971802 1506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85vmr\" (UniqueName: \"kubernetes.io/projected/e008d43b-2b4f-48ac-aa3e-45941b2bbf49-kube-api-access-85vmr\") pod \"hello-world-app-5d498dc89-p9lr2\" (UID: \"e008d43b-2b4f-48ac-aa3e-45941b2bbf49\") " pod="default/hello-world-app-5d498dc89-p9lr2"
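Two distinct kubelet issues repeat above: the eviction manager cannot compute image-filesystem stats from the CRI response, and pods reference a pull secret named "gcp-auth" (via the pod spec or its service account) that does not exist, so image pulls proceed unauthenticated. A minimal sketch of reproducing the second check from outside the node, i.e. confirming whether the secret the kubelet is warning about actually exists (namespace "default" matches the default/busybox pod in the log):

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The kubelet warning means this Get returns NotFound; the pull still
	// proceeds, just without the credentials the secret would have carried.
	_, err = client.CoreV1().Secrets("default").Get(context.TODO(), "gcp-auth", metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		fmt.Println(`secret "gcp-auth" not found (matches the kubelet warning)`)
	case err != nil:
		panic(err)
	default:
		fmt.Println("secret exists; the warning should clear")
	}
}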
==> storage-provisioner [5b9fa976e19fa711cb458e6215fbe3e10fe811f008e25fe3c430cca26ed33945] <==
W1213 08:34:42.205658 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:34:44.209507 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:34:44.218934 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:34:46.223221 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:34:46.231358 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:34:48.234613 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:34:48.240042 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:34:50.243389 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:34:50.249229 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:34:52.254377 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:34:52.260057 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:34:54.263803 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:34:54.269799 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:34:56.273574 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:34:56.279004 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:34:58.283575 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:34:58.289241 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:35:00.293006 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:35:00.298767 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:35:02.303026 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:35:02.308977 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:35:04.312849 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:35:04.321592 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:35:06.326723 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1213 08:35:06.335950 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
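The storage-provisioner polls core v1 Endpoints roughly every two seconds, so the API server repeats the deprecation warning on each call. A sketch of the replacement the warning itself recommends, reading discovery.k8s.io/v1 EndpointSlices through client-go (the namespace and service name in the label selector are illustrative, not from this log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Deprecated read (the call triggering the warnings above):
	//   client.CoreV1().Endpoints("kube-system").Get(...)
	// Replacement: EndpointSlices are linked to their Service by the
	// kubernetes.io/service-name label.
	slices, err := client.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		for _, ep := range s.Endpoints {
			fmt.Println(s.Name, ep.Addresses)
		}
	}
}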
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-917695 -n addons-917695
helpers_test.go:270: (dbg) Run: kubectl --context addons-917695 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-p9lr2 ingress-nginx-admission-create-jwjv9 ingress-nginx-admission-patch-mbzzw
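The post-mortem gathers non-running pods with the field selector status.phase!=Running. The same query expressed with client-go, mirroring the kubectl invocation above:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}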
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run: kubectl --context addons-917695 describe pod hello-world-app-5d498dc89-p9lr2 ingress-nginx-admission-create-jwjv9 ingress-nginx-admission-patch-mbzzw
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-917695 describe pod hello-world-app-5d498dc89-p9lr2 ingress-nginx-admission-create-jwjv9 ingress-nginx-admission-patch-mbzzw: exit status 1 (76.15838ms)
-- stdout --
Name: hello-world-app-5d498dc89-p9lr2
Namespace: default
Priority: 0
Service Account: default
Node: addons-917695/192.168.39.154
Start Time: Sat, 13 Dec 2025 08:35:04 +0000
Labels: app=hello-world-app
pod-template-hash=5d498dc89
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hello-world-app-5d498dc89
Containers:
hello-world-app:
Container ID:
Image: docker.io/kicbase/echo-server:1.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85vmr (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
kube-api-access-85vmr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-p9lr2 to addons-917695
  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-jwjv9" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-mbzzw" not found
** /stderr **
helpers_test.go:288: kubectl --context addons-917695 describe pod hello-world-app-5d498dc89-p9lr2 ingress-nginx-admission-create-jwjv9 ingress-nginx-admission-patch-mbzzw: exit status 1
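The describe step exits non-zero only because the two ingress-nginx admission pods had already been removed (their jobs completed and were cleaned up); hello-world-app itself is merely still pulling its image, per the Pulling event above. A sketch of polling that pod to Ready with client-go (pod name and namespace are taken from the describe output; the interval and timeout are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s, up to 2m, for the pod's Ready condition to turn True.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("default").Get(ctx,
				"hello-world-app-5d498dc89-p9lr2", metav1.GetOptions{})
			if err != nil {
				return false, err // abort on API errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil // still Pending/ContainerCreating; keep polling
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}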
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-917695 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-917695 addons disable ingress-dns --alsologtostderr -v=1: (1.669384811s)
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-917695 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-917695 addons disable ingress --alsologtostderr -v=1: (7.709223808s)
--- FAIL: TestAddons/parallel/Ingress (153.15s)