=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run: kubectl --context addons-618522 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run: kubectl --context addons-618522 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run: kubectl --context addons-618522 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [1d05c5f3-11c3-43f8-871c-1feba1d97857] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [1d05c5f3-11c3-43f8-871c-1feba1d97857] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.01067809s
I1206 08:32:05.268420 9552 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run: out/minikube-linux-amd64 -p addons-618522 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-618522 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.961407295s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
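The failed step above is a Host-header routing probe: curl inside the VM requests `/` from 127.0.0.1 while presenting `Host: nginx.example.com`, and the ingress controller is expected to route that to the nginx backend. The remote curl exited with status 28, which is curl's CURLE_OPERATION_TIMEDOUT, i.e. the ingress never answered. A self-contained sketch of the same Host-header check, using a throwaway local server in place of the real ingress (illustrative only, not the test's code):

```python
# Sketch of a Host-header routing check. The tiny HTTP server stands in
# for the ingress controller; only the Host-header mechanics are shown.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakeIngress(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route on the Host header, as an ingress rule would.
        if self.headers.get("Host") == "nginx.example.com":
            body, code = b"nginx backend", 200
        else:
            body, code = b"default backend - 404", 404
        self.send_response(code)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), FakeIngress)
threading.Thread(target=server.serve_forever, daemon=True).start()

# http.client honors an explicit Host header instead of generating one.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port, timeout=5)
conn.request("GET", "/", headers={"Host": "nginx.example.com"})
resp = conn.getresponse()
status, body = resp.status, resp.read().decode()
print(status, body)  # → 200 nginx backend
server.shutdown()
```

In the failing run the analogous request simply never completed within curl's window, so the test only learned "timeout", not which hop (ingress controller, service, or pod) dropped it.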
addons_test.go:288: (dbg) Run: kubectl --context addons-618522 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run: out/minikube-linux-amd64 -p addons-618522 ip
addons_test.go:299: (dbg) Run: nslookup hello-john.test 192.168.39.168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-618522 -n addons-618522
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p addons-618522 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-618522 logs -n 25: (1.272770468s)
helpers_test.go:260: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-807354 │ download-only-807354 │ jenkins │ v1.37.0 │ 06 Dec 25 08:29 UTC │ 06 Dec 25 08:29 UTC │
│ start │ --download-only -p binary-mirror-499439 --alsologtostderr --binary-mirror http://127.0.0.1:45531 --driver=kvm2 --container-runtime=crio │ binary-mirror-499439 │ jenkins │ v1.37.0 │ 06 Dec 25 08:29 UTC │ │
│ delete │ -p binary-mirror-499439 │ binary-mirror-499439 │ jenkins │ v1.37.0 │ 06 Dec 25 08:29 UTC │ 06 Dec 25 08:29 UTC │
│ addons │ disable dashboard -p addons-618522 │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:29 UTC │ │
│ addons │ enable dashboard -p addons-618522 │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:29 UTC │ │
│ start │ -p addons-618522 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:29 UTC │ 06 Dec 25 08:31 UTC │
│ addons │ addons-618522 addons disable volcano --alsologtostderr -v=1 │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:31 UTC │ 06 Dec 25 08:31 UTC │
│ addons │ addons-618522 addons disable gcp-auth --alsologtostderr -v=1 │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:31 UTC │ 06 Dec 25 08:31 UTC │
│ addons │ enable headlamp -p addons-618522 --alsologtostderr -v=1 │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:31 UTC │ 06 Dec 25 08:31 UTC │
│ addons │ addons-618522 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:31 UTC │ 06 Dec 25 08:31 UTC │
│ addons │ addons-618522 addons disable metrics-server --alsologtostderr -v=1 │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:31 UTC │ 06 Dec 25 08:31 UTC │
│ addons │ addons-618522 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:31 UTC │ 06 Dec 25 08:32 UTC │
│ addons │ addons-618522 addons disable headlamp --alsologtostderr -v=1 │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ 06 Dec 25 08:32 UTC │
│ ip │ addons-618522 ip │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ 06 Dec 25 08:32 UTC │
│ addons │ addons-618522 addons disable registry --alsologtostderr -v=1 │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ 06 Dec 25 08:32 UTC │
│ ssh │ addons-618522 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-618522 │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ 06 Dec 25 08:32 UTC │
│ addons │ addons-618522 addons disable registry-creds --alsologtostderr -v=1 │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ 06 Dec 25 08:32 UTC │
│ addons │ addons-618522 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ 06 Dec 25 08:32 UTC │
│ addons │ addons-618522 addons disable yakd --alsologtostderr -v=1 │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ 06 Dec 25 08:32 UTC │
│ ssh │ addons-618522 ssh cat /opt/local-path-provisioner/pvc-c8bb1d8f-4c87-4fdb-8a4a-d380c7c73589_default_test-pvc/file1 │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ 06 Dec 25 08:32 UTC │
│ addons │ addons-618522 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ 06 Dec 25 08:32 UTC │
│ addons │ addons-618522 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:33 UTC │ 06 Dec 25 08:33 UTC │
│ addons │ addons-618522 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:33 UTC │ 06 Dec 25 08:33 UTC │
│ ip │ addons-618522 ip │ addons-618522 │ jenkins │ v1.37.0 │ 06 Dec 25 08:34 UTC │ 06 Dec 25 08:34 UTC │
└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/06 08:29:05
Running on machine: ubuntu-20-agent-11
Binary: Built with gc go1.25.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
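The format string above describes the klog/glog layout each of the following lines uses: a severity letter (I/W/E/F), month and day, a wall-clock timestamp with microseconds, the thread id, and the emitting file:line. A small stdlib parser for that layout (a sketch against the stated format, not minikube code):

```python
import re

# [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG_RE = re.compile(
    r"(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+"
    r"(?P<tid>\d+)\s+"
    r"(?P<file>[^:]+):(?P<line>\d+)\]\s"
    r"(?P<msg>.*)"
)

sample = "I1206 08:29:05.698070   10525 out.go:360] Setting OutFile to fd 1 ..."
m = KLOG_RE.match(sample)
print(m.group("sev"), m.group("file"), m.group("line"), m.group("msg"))
```

The `file:line` field (e.g. `out.go:360`) is what makes it possible to trace any log line below back to the minikube source.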
I1206 08:29:05.698070 10525 out.go:360] Setting OutFile to fd 1 ...
I1206 08:29:05.698178 10525 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:29:05.698182 10525 out.go:374] Setting ErrFile to fd 2...
I1206 08:29:05.698187 10525 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:29:05.698396 10525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
I1206 08:29:05.698928 10525 out.go:368] Setting JSON to false
I1206 08:29:05.699711 10525 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":688,"bootTime":1765009058,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1206 08:29:05.699776 10525 start.go:143] virtualization: kvm guest
I1206 08:29:05.701836 10525 out.go:179] * [addons-618522] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1206 08:29:05.703286 10525 out.go:179] - MINIKUBE_LOCATION=22049
I1206 08:29:05.703296 10525 notify.go:221] Checking for updates...
I1206 08:29:05.705593 10525 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1206 08:29:05.706685 10525 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
I1206 08:29:05.707739 10525 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
I1206 08:29:05.708774 10525 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1206 08:29:05.709883 10525 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1206 08:29:05.711084 10525 driver.go:422] Setting default libvirt URI to qemu:///system
I1206 08:29:05.741890 10525 out.go:179] * Using the kvm2 driver based on user configuration
I1206 08:29:05.742910 10525 start.go:309] selected driver: kvm2
I1206 08:29:05.742926 10525 start.go:927] validating driver "kvm2" against <nil>
I1206 08:29:05.742943 10525 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1206 08:29:05.743959 10525 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1206 08:29:05.744281 10525 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1206 08:29:05.744326 10525 cni.go:84] Creating CNI manager for ""
I1206 08:29:05.744379 10525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1206 08:29:05.744391 10525 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1206 08:29:05.744437 10525 start.go:353] cluster config:
{Name:addons-618522 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-618522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1206 08:29:05.744578 10525 iso.go:125] acquiring lock: {Name:mk30cf35cfaf5c28a2b5f78c7b431de5eb8c8e82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1206 08:29:05.746546 10525 out.go:179] * Starting "addons-618522" primary control-plane node in "addons-618522" cluster
I1206 08:29:05.747565 10525 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1206 08:29:05.747593 10525 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
I1206 08:29:05.747610 10525 cache.go:65] Caching tarball of preloaded images
I1206 08:29:05.747697 10525 preload.go:238] Found /home/jenkins/minikube-integration/22049-5603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1206 08:29:05.747708 10525 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
I1206 08:29:05.747989 10525 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/config.json ...
I1206 08:29:05.748058 10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/config.json: {Name:mk7f9da94ca10d314b801d8105975097da70fef6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:29:05.748190 10525 start.go:360] acquireMachinesLock for addons-618522: {Name:mk3342af5720fb96b5115fa945410cab4f7bd1fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1206 08:29:05.748231 10525 start.go:364] duration metric: took 28.823µs to acquireMachinesLock for "addons-618522"
I1206 08:29:05.748248 10525 start.go:93] Provisioning new machine with config: &{Name:addons-618522 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-618522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1206 08:29:05.748294 10525 start.go:125] createHost starting for "" (driver="kvm2")
I1206 08:29:05.749716 10525 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1206 08:29:05.749861 10525 start.go:159] libmachine.API.Create for "addons-618522" (driver="kvm2")
I1206 08:29:05.749888 10525 client.go:173] LocalClient.Create starting
I1206 08:29:05.749978 10525 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem
I1206 08:29:05.781012 10525 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/cert.pem
I1206 08:29:05.906650 10525 main.go:143] libmachine: creating domain...
I1206 08:29:05.906675 10525 main.go:143] libmachine: creating network...
I1206 08:29:05.908021 10525 main.go:143] libmachine: found existing default network
I1206 08:29:05.908193 10525 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1206 08:29:05.908727 10525 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00157fc80}
I1206 08:29:05.908828 10525 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-618522</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
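The network.go line above shows minikube picking the free private subnet 192.168.39.0/24 and deriving Gateway (.1, used as the bridge IP in the XML), ClientMin/ClientMax (.2-.254), and Broadcast (.255) from it; note the DHCP range it defines stops one address short, at .253. The derivation can be checked with the stdlib `ipaddress` module:

```python
import ipaddress

# The free subnet minikube selected for the mk-addons-618522 network.
net = ipaddress.ip_network("192.168.39.0/24")

gateway = net.network_address + 1             # .1, the bridge/gateway address
hosts = list(net.hosts())                     # usable hosts: .1 through .254
client_min, client_max = hosts[1], hosts[-1]  # clients: .2 .. .254

print(gateway, client_min, client_max, net.broadcast_address)
# → 192.168.39.1 192.168.39.2 192.168.39.254 192.168.39.255
```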
I1206 08:29:05.914620 10525 main.go:143] libmachine: creating private network mk-addons-618522 192.168.39.0/24...
I1206 08:29:05.983441 10525 main.go:143] libmachine: private network mk-addons-618522 192.168.39.0/24 created
I1206 08:29:05.983758 10525 main.go:143] libmachine: <network>
<name>mk-addons-618522</name>
<uuid>b78eb98a-a065-4470-8e55-ee6c47b15f2f</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:7c:55:b3'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1206 08:29:05.983788 10525 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522 ...
I1206 08:29:05.983815 10525 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22049-5603/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso
I1206 08:29:05.983827 10525 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22049-5603/.minikube
I1206 08:29:05.983908 10525 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22049-5603/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22049-5603/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso...
I1206 08:29:06.269048 10525 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa...
I1206 08:29:06.417744 10525 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/addons-618522.rawdisk...
I1206 08:29:06.417791 10525 main.go:143] libmachine: Writing magic tar header
I1206 08:29:06.417812 10525 main.go:143] libmachine: Writing SSH key tar header
I1206 08:29:06.417883 10525 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522 ...
I1206 08:29:06.417945 10525 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522
I1206 08:29:06.417999 10525 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522 (perms=drwx------)
I1206 08:29:06.418017 10525 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22049-5603/.minikube/machines
I1206 08:29:06.418026 10525 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22049-5603/.minikube/machines (perms=drwxr-xr-x)
I1206 08:29:06.418037 10525 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22049-5603/.minikube
I1206 08:29:06.418048 10525 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22049-5603/.minikube (perms=drwxr-xr-x)
I1206 08:29:06.418058 10525 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22049-5603
I1206 08:29:06.418076 10525 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22049-5603 (perms=drwxrwxr-x)
I1206 08:29:06.418086 10525 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1206 08:29:06.418096 10525 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1206 08:29:06.418106 10525 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1206 08:29:06.418115 10525 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1206 08:29:06.418124 10525 main.go:143] libmachine: checking permissions on dir: /home
I1206 08:29:06.418133 10525 main.go:143] libmachine: skipping /home - not owner
I1206 08:29:06.418137 10525 main.go:143] libmachine: defining domain...
I1206 08:29:06.419460 10525 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-618522</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/addons-618522.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-618522'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1206 08:29:06.426746 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:3d:72:3b in network default
I1206 08:29:06.427302 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:06.427320 10525 main.go:143] libmachine: starting domain...
I1206 08:29:06.427327 10525 main.go:143] libmachine: ensuring networks are active...
I1206 08:29:06.427982 10525 main.go:143] libmachine: Ensuring network default is active
I1206 08:29:06.428312 10525 main.go:143] libmachine: Ensuring network mk-addons-618522 is active
I1206 08:29:06.428896 10525 main.go:143] libmachine: getting domain XML...
I1206 08:29:06.429878 10525 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-618522</name>
<uuid>57f399cc-dddf-4d4f-b1df-b1180b83c0f4</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/addons-618522.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:96:93:89'/>
<source network='mk-addons-618522'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:3d:72:3b'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1206 08:29:07.715375 10525 main.go:143] libmachine: waiting for domain to start...
I1206 08:29:07.716792 10525 main.go:143] libmachine: domain is now running
I1206 08:29:07.716811 10525 main.go:143] libmachine: waiting for IP...
I1206 08:29:07.717542 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:07.717956 10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
I1206 08:29:07.717998 10525 main.go:143] libmachine: trying to list again with source=arp
I1206 08:29:07.718255 10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
I1206 08:29:07.718297 10525 retry.go:31] will retry after 266.106603ms: waiting for domain to come up
I1206 08:29:07.985832 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:07.986398 10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
I1206 08:29:07.986415 10525 main.go:143] libmachine: trying to list again with source=arp
I1206 08:29:07.986761 10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
I1206 08:29:07.986801 10525 retry.go:31] will retry after 387.267266ms: waiting for domain to come up
I1206 08:29:08.375586 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:08.376137 10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
I1206 08:29:08.376158 10525 main.go:143] libmachine: trying to list again with source=arp
I1206 08:29:08.376529 10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
I1206 08:29:08.376580 10525 retry.go:31] will retry after 331.631857ms: waiting for domain to come up
I1206 08:29:08.710026 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:08.710480 10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
I1206 08:29:08.710494 10525 main.go:143] libmachine: trying to list again with source=arp
I1206 08:29:08.710731 10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
I1206 08:29:08.710763 10525 retry.go:31] will retry after 523.998005ms: waiting for domain to come up
I1206 08:29:09.236544 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:09.237018 10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
I1206 08:29:09.237031 10525 main.go:143] libmachine: trying to list again with source=arp
I1206 08:29:09.237270 10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
I1206 08:29:09.237299 10525 retry.go:31] will retry after 650.549091ms: waiting for domain to come up
I1206 08:29:09.889019 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:09.889513 10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
I1206 08:29:09.889526 10525 main.go:143] libmachine: trying to list again with source=arp
I1206 08:29:09.889818 10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
I1206 08:29:09.889851 10525 retry.go:31] will retry after 683.637032ms: waiting for domain to come up
I1206 08:29:10.574615 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:10.575246 10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
I1206 08:29:10.575261 10525 main.go:143] libmachine: trying to list again with source=arp
I1206 08:29:10.575593 10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
I1206 08:29:10.575627 10525 retry.go:31] will retry after 1.146917189s: waiting for domain to come up
I1206 08:29:11.724481 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:11.724948 10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
I1206 08:29:11.724969 10525 main.go:143] libmachine: trying to list again with source=arp
I1206 08:29:11.725218 10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
I1206 08:29:11.725254 10525 retry.go:31] will retry after 1.046923271s: waiting for domain to come up
I1206 08:29:12.773594 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:12.774131 10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
I1206 08:29:12.774148 10525 main.go:143] libmachine: trying to list again with source=arp
I1206 08:29:12.774421 10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
I1206 08:29:12.774458 10525 retry.go:31] will retry after 1.269020208s: waiting for domain to come up
I1206 08:29:14.044811 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:14.045348 10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
I1206 08:29:14.045364 10525 main.go:143] libmachine: trying to list again with source=arp
I1206 08:29:14.045622 10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
I1206 08:29:14.045656 10525 retry.go:31] will retry after 1.538945073s: waiting for domain to come up
I1206 08:29:15.586482 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:15.587146 10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
I1206 08:29:15.587161 10525 main.go:143] libmachine: trying to list again with source=arp
I1206 08:29:15.587443 10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
I1206 08:29:15.587502 10525 retry.go:31] will retry after 2.905373773s: waiting for domain to come up
I1206 08:29:18.496453 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:18.497022 10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
I1206 08:29:18.497037 10525 main.go:143] libmachine: trying to list again with source=arp
I1206 08:29:18.497352 10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
I1206 08:29:18.497382 10525 retry.go:31] will retry after 2.524389877s: waiting for domain to come up
I1206 08:29:21.023815 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:21.024226 10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
I1206 08:29:21.024238 10525 main.go:143] libmachine: trying to list again with source=arp
I1206 08:29:21.024516 10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
I1206 08:29:21.024549 10525 retry.go:31] will retry after 3.429567982s: waiting for domain to come up
I1206 08:29:24.458105 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:24.458643 10525 main.go:143] libmachine: domain addons-618522 has current primary IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:24.458659 10525 main.go:143] libmachine: found domain IP: 192.168.39.168
I1206 08:29:24.458671 10525 main.go:143] libmachine: reserving static IP address...
I1206 08:29:24.459027 10525 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-618522", mac: "52:54:00:96:93:89", ip: "192.168.39.168"} in network mk-addons-618522
I1206 08:29:24.643084 10525 main.go:143] libmachine: reserved static IP address 192.168.39.168 for domain addons-618522
I1206 08:29:24.643103 10525 main.go:143] libmachine: waiting for SSH...
I1206 08:29:24.643109 10525 main.go:143] libmachine: Getting to WaitForSSH function...
I1206 08:29:24.645843 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:24.646325 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:minikube Clientid:01:52:54:00:96:93:89}
I1206 08:29:24.646350 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:24.646548 10525 main.go:143] libmachine: Using SSH client type: native
I1206 08:29:24.646796 10525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.168 22 <nil> <nil>}
I1206 08:29:24.646808 10525 main.go:143] libmachine: About to run SSH command:
exit 0
I1206 08:29:24.752319 10525 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1206 08:29:24.752686 10525 main.go:143] libmachine: domain creation complete
I1206 08:29:24.754083 10525 machine.go:94] provisionDockerMachine start ...
I1206 08:29:24.756392 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:24.756799 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:24.756826 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:24.757006 10525 main.go:143] libmachine: Using SSH client type: native
I1206 08:29:24.757244 10525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.168 22 <nil> <nil>}
I1206 08:29:24.757258 10525 main.go:143] libmachine: About to run SSH command:
hostname
I1206 08:29:24.862389 10525 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1206 08:29:24.862416 10525 buildroot.go:166] provisioning hostname "addons-618522"
I1206 08:29:24.865315 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:24.865731 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:24.865759 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:24.865968 10525 main.go:143] libmachine: Using SSH client type: native
I1206 08:29:24.866252 10525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.168 22 <nil> <nil>}
I1206 08:29:24.866270 10525 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-618522 && echo "addons-618522" | sudo tee /etc/hostname
I1206 08:29:24.990206 10525 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-618522
I1206 08:29:24.993228 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:24.993648 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:24.993676 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:24.993846 10525 main.go:143] libmachine: Using SSH client type: native
I1206 08:29:24.994072 10525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.168 22 <nil> <nil>}
I1206 08:29:24.994097 10525 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-618522' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-618522/g' /etc/hosts;
else
echo '127.0.1.1 addons-618522' | sudo tee -a /etc/hosts;
fi
fi
I1206 08:29:25.109349 10525 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1206 08:29:25.109375 10525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5603/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5603/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5603/.minikube}
I1206 08:29:25.109396 10525 buildroot.go:174] setting up certificates
I1206 08:29:25.109406 10525 provision.go:84] configureAuth start
I1206 08:29:25.112095 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.112506 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:25.112527 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.114758 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.115096 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:25.115121 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.115325 10525 provision.go:143] copyHostCerts
I1206 08:29:25.115395 10525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5603/.minikube/key.pem (1675 bytes)
I1206 08:29:25.115566 10525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5603/.minikube/ca.pem (1082 bytes)
I1206 08:29:25.115657 10525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5603/.minikube/cert.pem (1123 bytes)
I1206 08:29:25.115727 10525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5603/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca-key.pem org=jenkins.addons-618522 san=[127.0.0.1 192.168.39.168 addons-618522 localhost minikube]
I1206 08:29:25.171718 10525 provision.go:177] copyRemoteCerts
I1206 08:29:25.171790 10525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1206 08:29:25.174140 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.174486 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:25.174512 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.174644 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:25.259935 10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1206 08:29:25.292568 10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1206 08:29:25.324357 10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1206 08:29:25.356329 10525 provision.go:87] duration metric: took 246.907063ms to configureAuth
I1206 08:29:25.356390 10525 buildroot.go:189] setting minikube options for container-runtime
I1206 08:29:25.356576 10525 config.go:182] Loaded profile config "addons-618522": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:29:25.359698 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.360097 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:25.360124 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.360343 10525 main.go:143] libmachine: Using SSH client type: native
I1206 08:29:25.360552 10525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.168 22 <nil> <nil>}
I1206 08:29:25.360567 10525 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1206 08:29:25.598339 10525 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1206 08:29:25.598372 10525 machine.go:97] duration metric: took 844.27102ms to provisionDockerMachine
I1206 08:29:25.598386 10525 client.go:176] duration metric: took 19.848491145s to LocalClient.Create
I1206 08:29:25.598407 10525 start.go:167] duration metric: took 19.848544009s to libmachine.API.Create "addons-618522"
I1206 08:29:25.598419 10525 start.go:293] postStartSetup for "addons-618522" (driver="kvm2")
I1206 08:29:25.598433 10525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1206 08:29:25.598536 10525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1206 08:29:25.601525 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.601870 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:25.601893 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.602008 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:25.686355 10525 ssh_runner.go:195] Run: cat /etc/os-release
I1206 08:29:25.691695 10525 info.go:137] Remote host: Buildroot 2025.02
I1206 08:29:25.691717 10525 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5603/.minikube/addons for local assets ...
I1206 08:29:25.691787 10525 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5603/.minikube/files for local assets ...
I1206 08:29:25.691810 10525 start.go:296] duration metric: took 93.384984ms for postStartSetup
I1206 08:29:25.694779 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.695171 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:25.695194 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.695451 10525 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/config.json ...
I1206 08:29:25.695673 10525 start.go:128] duration metric: took 19.947368476s to createHost
I1206 08:29:25.697762 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.698238 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:25.698262 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.698507 10525 main.go:143] libmachine: Using SSH client type: native
I1206 08:29:25.698700 10525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.168 22 <nil> <nil>}
I1206 08:29:25.698711 10525 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1206 08:29:25.804432 10525 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765009765.765707364
I1206 08:29:25.804454 10525 fix.go:216] guest clock: 1765009765.765707364
I1206 08:29:25.804463 10525 fix.go:229] Guest: 2025-12-06 08:29:25.765707364 +0000 UTC Remote: 2025-12-06 08:29:25.695686605 +0000 UTC m=+20.045537162 (delta=70.020759ms)
I1206 08:29:25.804509 10525 fix.go:200] guest clock delta is within tolerance: 70.020759ms
I1206 08:29:25.804516 10525 start.go:83] releasing machines lock for "addons-618522", held for 20.056273909s
I1206 08:29:25.807260 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.807668 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:25.807692 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.808209 10525 ssh_runner.go:195] Run: cat /version.json
I1206 08:29:25.808309 10525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1206 08:29:25.811241 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.811455 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.811672 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:25.811699 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.811849 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:25.811851 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:25.811878 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:25.812071 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:25.917122 10525 ssh_runner.go:195] Run: systemctl --version
I1206 08:29:25.923910 10525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1206 08:29:26.085074 10525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1206 08:29:26.092288 10525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1206 08:29:26.092354 10525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1206 08:29:26.113649 10525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1206 08:29:26.113673 10525 start.go:496] detecting cgroup driver to use...
I1206 08:29:26.113730 10525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1206 08:29:26.133784 10525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1206 08:29:26.151929 10525 docker.go:218] disabling cri-docker service (if available) ...
I1206 08:29:26.151994 10525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1206 08:29:26.170197 10525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1206 08:29:26.187579 10525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1206 08:29:26.329201 10525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1206 08:29:26.535429 10525 docker.go:234] disabling docker service ...
I1206 08:29:26.535526 10525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1206 08:29:26.552653 10525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1206 08:29:26.568392 10525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1206 08:29:26.726802 10525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1206 08:29:26.871713 10525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1206 08:29:26.889256 10525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1206 08:29:26.913635 10525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1206 08:29:26.913710 10525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1206 08:29:26.926424 10525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1206 08:29:26.926495 10525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1206 08:29:26.940063 10525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1206 08:29:26.952623 10525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1206 08:29:26.965438 10525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1206 08:29:26.979310 10525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1206 08:29:26.991973 10525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1206 08:29:27.014089 10525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1206 08:29:27.027675 10525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1206 08:29:27.038749 10525 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1206 08:29:27.038822 10525 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1206 08:29:27.063671 10525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1206 08:29:27.079524 10525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1206 08:29:27.223133 10525 ssh_runner.go:195] Run: sudo systemctl restart crio
I1206 08:29:27.335179 10525 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1206 08:29:27.335298 10525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1206 08:29:27.341359 10525 start.go:564] Will wait 60s for crictl version
I1206 08:29:27.341445 10525 ssh_runner.go:195] Run: which crictl
I1206 08:29:27.345788 10525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1206 08:29:27.383352 10525 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1206 08:29:27.383504 10525 ssh_runner.go:195] Run: crio --version
I1206 08:29:27.413774 10525 ssh_runner.go:195] Run: crio --version
I1206 08:29:27.446797 10525 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
I1206 08:29:27.450690 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:27.451086 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:27.451114 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:27.451304 10525 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1206 08:29:27.456537 10525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1206 08:29:27.472945 10525 kubeadm.go:884] updating cluster {Name:addons-618522 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-618522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1206 08:29:27.473098 10525 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1206 08:29:27.473164 10525 ssh_runner.go:195] Run: sudo crictl images --output json
I1206 08:29:27.505062 10525 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
I1206 08:29:27.505133 10525 ssh_runner.go:195] Run: which lz4
I1206 08:29:27.509780 10525 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1206 08:29:27.514613 10525 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1206 08:29:27.514652 10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
I1206 08:29:28.842414 10525 crio.go:462] duration metric: took 1.332662154s to copy over tarball
I1206 08:29:28.842504 10525 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1206 08:29:30.428560 10525 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.586023264s)
I1206 08:29:30.428592 10525 crio.go:469] duration metric: took 1.586151495s to extract the tarball
I1206 08:29:30.428601 10525 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1206 08:29:30.465345 10525 ssh_runner.go:195] Run: sudo crictl images --output json
I1206 08:29:30.510374 10525 crio.go:514] all images are preloaded for cri-o runtime.
I1206 08:29:30.510400 10525 cache_images.go:86] Images are preloaded, skipping loading
I1206 08:29:30.510410 10525 kubeadm.go:935] updating node { 192.168.39.168 8443 v1.34.2 crio true true} ...
I1206 08:29:30.510524 10525 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-618522 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
[Install]
config:
{KubernetesVersion:v1.34.2 ClusterName:addons-618522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1206 08:29:30.510608 10525 ssh_runner.go:195] Run: crio config
I1206 08:29:30.559183 10525 cni.go:84] Creating CNI manager for ""
I1206 08:29:30.559207 10525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1206 08:29:30.559223 10525 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1206 08:29:30.559258 10525 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.168 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-618522 NodeName:addons-618522 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1206 08:29:30.559387 10525 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.168
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-618522"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.168"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.168"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1206 08:29:30.559483 10525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
I1206 08:29:30.572367 10525 binaries.go:51] Found k8s binaries, skipping transfer
I1206 08:29:30.572438 10525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1206 08:29:30.585276 10525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I1206 08:29:30.607056 10525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1206 08:29:30.628970 10525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
I1206 08:29:30.650950 10525 ssh_runner.go:195] Run: grep 192.168.39.168 control-plane.minikube.internal$ /etc/hosts
I1206 08:29:30.655582 10525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.168 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1206 08:29:30.671341 10525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1206 08:29:30.817744 10525 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1206 08:29:30.853664 10525 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522 for IP: 192.168.39.168
I1206 08:29:30.853696 10525 certs.go:195] generating shared ca certs ...
I1206 08:29:30.853720 10525 certs.go:227] acquiring lock for ca certs: {Name:mk000359972764fead2b3aaf8b843862aa35270c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:29:30.853911 10525 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5603/.minikube/ca.key
I1206 08:29:30.959183 10525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt ...
I1206 08:29:30.959212 10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt: {Name:mk98d18dd8a6f9e698099692788ea182be89556f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:29:30.959385 10525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5603/.minikube/ca.key ...
I1206 08:29:30.959398 10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/ca.key: {Name:mk617b4143abd6eb5b699e411431f4c3518e2a8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:29:30.959494 10525 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.key
I1206 08:29:31.097502 10525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.crt ...
I1206 08:29:31.097532 10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.crt: {Name:mkfc7ab92bbdf62beb6034d33cd4580952764663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:29:31.097713 10525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.key ...
I1206 08:29:31.097726 10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.key: {Name:mk4e124a8a4cadebc0035c7ad9b075cdab45993b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:29:31.097807 10525 certs.go:257] generating profile certs ...
I1206 08:29:31.097866 10525 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.key
I1206 08:29:31.097880 10525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt with IP's: []
I1206 08:29:31.203120 10525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt ...
I1206 08:29:31.203150 10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: {Name:mk3762266801ac43724b8f8cd842b85d6671b320 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:29:31.203310 10525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.key ...
I1206 08:29:31.203321 10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.key: {Name:mk1236cd5229554c01f652817c35695f89a44b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:29:31.203389 10525 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.key.32668d86
I1206 08:29:31.203409 10525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.crt.32668d86 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.168]
I1206 08:29:31.327479 10525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.crt.32668d86 ...
I1206 08:29:31.327511 10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.crt.32668d86: {Name:mkd5cf6dfcde218ea513037b7edcd6f8c7a9464c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:29:31.327669 10525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.key.32668d86 ...
I1206 08:29:31.327682 10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.key.32668d86: {Name:mkc0d2d20a6672d311a9a0fedef702fc2d832d50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:29:31.327753 10525 certs.go:382] copying /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.crt.32668d86 -> /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.crt
I1206 08:29:31.327822 10525 certs.go:386] copying /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.key.32668d86 -> /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.key
I1206 08:29:31.327869 10525 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/proxy-client.key
I1206 08:29:31.327887 10525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/proxy-client.crt with IP's: []
I1206 08:29:31.419224 10525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/proxy-client.crt ...
I1206 08:29:31.419250 10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/proxy-client.crt: {Name:mk0c484331172407ea8b520fc091cc8bce5130fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:29:31.419411 10525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/proxy-client.key ...
I1206 08:29:31.419422 10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/proxy-client.key: {Name:mkca5c2903e4dbfd94c7024ef1aca11c61796e19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:29:31.419617 10525 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca-key.pem (1675 bytes)
I1206 08:29:31.419655 10525 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem (1082 bytes)
I1206 08:29:31.419679 10525 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/cert.pem (1123 bytes)
I1206 08:29:31.419702 10525 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/key.pem (1675 bytes)
I1206 08:29:31.420227 10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1206 08:29:31.452258 10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1206 08:29:31.482562 10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1206 08:29:31.517113 10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1206 08:29:31.553291 10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1206 08:29:31.586644 10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1206 08:29:31.617068 10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1206 08:29:31.647513 10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1206 08:29:31.677388 10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1206 08:29:31.707416 10525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1206 08:29:31.728515 10525 ssh_runner.go:195] Run: openssl version
I1206 08:29:31.735163 10525 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1206 08:29:31.747846 10525 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1206 08:29:31.760277 10525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1206 08:29:31.766508 10525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 6 08:29 /usr/share/ca-certificates/minikubeCA.pem
I1206 08:29:31.766576 10525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1206 08:29:31.774394 10525 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1206 08:29:31.786915 10525 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1206 08:29:31.798853 10525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1206 08:29:31.803735 10525 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1206 08:29:31.803794 10525 kubeadm.go:401] StartCluster: {Name:addons-618522 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-618522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1206 08:29:31.803857 10525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1206 08:29:31.803898 10525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1206 08:29:31.840137 10525 cri.go:89] found id: ""
I1206 08:29:31.840235 10525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1206 08:29:31.853119 10525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1206 08:29:31.865576 10525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1206 08:29:31.877569 10525 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1206 08:29:31.877587 10525 kubeadm.go:158] found existing configuration files:
I1206 08:29:31.877633 10525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1206 08:29:31.888757 10525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1206 08:29:31.888815 10525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1206 08:29:31.900619 10525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1206 08:29:31.911651 10525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1206 08:29:31.911711 10525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1206 08:29:31.923419 10525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1206 08:29:31.935211 10525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1206 08:29:31.935265 10525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1206 08:29:31.948207 10525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1206 08:29:31.960235 10525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1206 08:29:31.960286 10525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1206 08:29:31.973301 10525 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1206 08:29:32.122282 10525 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1206 08:29:43.999988 10525 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
I1206 08:29:44.000093 10525 kubeadm.go:319] [preflight] Running pre-flight checks
I1206 08:29:44.000175 10525 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1206 08:29:44.000345 10525 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1206 08:29:44.000521 10525 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1206 08:29:44.000616 10525 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1206 08:29:44.003486 10525 out.go:252] - Generating certificates and keys ...
I1206 08:29:44.003585 10525 kubeadm.go:319] [certs] Using existing ca certificate authority
I1206 08:29:44.003669 10525 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1206 08:29:44.003804 10525 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1206 08:29:44.003899 10525 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1206 08:29:44.003982 10525 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1206 08:29:44.004054 10525 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1206 08:29:44.004138 10525 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1206 08:29:44.004286 10525 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-618522 localhost] and IPs [192.168.39.168 127.0.0.1 ::1]
I1206 08:29:44.004358 10525 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1206 08:29:44.004503 10525 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-618522 localhost] and IPs [192.168.39.168 127.0.0.1 ::1]
I1206 08:29:44.004593 10525 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1206 08:29:44.004680 10525 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1206 08:29:44.004743 10525 kubeadm.go:319] [certs] Generating "sa" key and public key
I1206 08:29:44.004845 10525 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1206 08:29:44.004953 10525 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1206 08:29:44.005038 10525 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1206 08:29:44.005124 10525 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1206 08:29:44.005214 10525 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1206 08:29:44.005292 10525 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1206 08:29:44.005400 10525 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1206 08:29:44.005504 10525 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1206 08:29:44.006803 10525 out.go:252] - Booting up control plane ...
I1206 08:29:44.006890 10525 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1206 08:29:44.006975 10525 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1206 08:29:44.007058 10525 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1206 08:29:44.007171 10525 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1206 08:29:44.007291 10525 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1206 08:29:44.007418 10525 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1206 08:29:44.007520 10525 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1206 08:29:44.007561 10525 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1206 08:29:44.007667 10525 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1206 08:29:44.007771 10525 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1206 08:29:44.007826 10525 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.821404ms
I1206 08:29:44.007905 10525 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1206 08:29:44.007970 10525 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.168:8443/livez
I1206 08:29:44.008039 10525 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1206 08:29:44.008108 10525 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1206 08:29:44.008183 10525 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.836142283s
I1206 08:29:44.008254 10525 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.06492022s
I1206 08:29:44.008320 10525 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001600328s
I1206 08:29:44.008496 10525 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1206 08:29:44.008659 10525 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1206 08:29:44.008708 10525 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1206 08:29:44.008921 10525 kubeadm.go:319] [mark-control-plane] Marking the node addons-618522 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1206 08:29:44.008983 10525 kubeadm.go:319] [bootstrap-token] Using token: 2rgaqd.9q3qr2oogpfcg4aj
I1206 08:29:44.010299 10525 out.go:252] - Configuring RBAC rules ...
I1206 08:29:44.010403 10525 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1206 08:29:44.010494 10525 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1206 08:29:44.010614 10525 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1206 08:29:44.010742 10525 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1206 08:29:44.010907 10525 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1206 08:29:44.010996 10525 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1206 08:29:44.011145 10525 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1206 08:29:44.011215 10525 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1206 08:29:44.011289 10525 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1206 08:29:44.011311 10525 kubeadm.go:319]
I1206 08:29:44.011396 10525 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1206 08:29:44.011405 10525 kubeadm.go:319]
I1206 08:29:44.011520 10525 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1206 08:29:44.011530 10525 kubeadm.go:319]
I1206 08:29:44.011551 10525 kubeadm.go:319] mkdir -p $HOME/.kube
I1206 08:29:44.011600 10525 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1206 08:29:44.011648 10525 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1206 08:29:44.011654 10525 kubeadm.go:319]
I1206 08:29:44.011698 10525 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1206 08:29:44.011704 10525 kubeadm.go:319]
I1206 08:29:44.011743 10525 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1206 08:29:44.011749 10525 kubeadm.go:319]
I1206 08:29:44.011791 10525 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1206 08:29:44.011906 10525 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1206 08:29:44.011973 10525 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1206 08:29:44.011979 10525 kubeadm.go:319]
I1206 08:29:44.012060 10525 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1206 08:29:44.012130 10525 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1206 08:29:44.012139 10525 kubeadm.go:319]
I1206 08:29:44.012208 10525 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2rgaqd.9q3qr2oogpfcg4aj \
I1206 08:29:44.012303 10525 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:2d17d07b3ca8c174fceaa58ec10b5dce3bfd9799b90057e73686cf2c9f9f3441 \
I1206 08:29:44.012323 10525 kubeadm.go:319] --control-plane
I1206 08:29:44.012327 10525 kubeadm.go:319]
I1206 08:29:44.012413 10525 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1206 08:29:44.012424 10525 kubeadm.go:319]
I1206 08:29:44.012539 10525 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2rgaqd.9q3qr2oogpfcg4aj \
I1206 08:29:44.012676 10525 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:2d17d07b3ca8c174fceaa58ec10b5dce3bfd9799b90057e73686cf2c9f9f3441
I1206 08:29:44.012691 10525 cni.go:84] Creating CNI manager for ""
I1206 08:29:44.012701 10525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1206 08:29:44.014013 10525 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1206 08:29:44.015128 10525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1206 08:29:44.028452 10525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1206 08:29:44.055027 10525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1206 08:29:44.055119 10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 08:29:44.055150 10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-618522 minikube.k8s.io/updated_at=2025_12_06T08_29_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=addons-618522 minikube.k8s.io/primary=true
I1206 08:29:44.117986 10525 ops.go:34] apiserver oom_adj: -16
I1206 08:29:44.191458 10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 08:29:44.692166 10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 08:29:45.191843 10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 08:29:45.692249 10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 08:29:46.191601 10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 08:29:46.691740 10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 08:29:47.192125 10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 08:29:47.691513 10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 08:29:48.192570 10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 08:29:48.691713 10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 08:29:48.793938 10525 kubeadm.go:1114] duration metric: took 4.738882027s to wait for elevateKubeSystemPrivileges
I1206 08:29:48.793973 10525 kubeadm.go:403] duration metric: took 16.99018465s to StartCluster
I1206 08:29:48.793995 10525 settings.go:142] acquiring lock: {Name:mk1c4376642fa0e1442961c9690dcfd3d7346ba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:29:48.794447 10525 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22049-5603/kubeconfig
I1206 08:29:48.795027 10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/kubeconfig: {Name:mk8c42c505f5f7f0ebf46166194656af7c5589e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:29:48.795263 10525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1206 08:29:48.795351 10525 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1206 08:29:48.795425 10525 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1206 08:29:48.795586 10525 addons.go:70] Setting yakd=true in profile "addons-618522"
I1206 08:29:48.795588 10525 addons.go:70] Setting cloud-spanner=true in profile "addons-618522"
I1206 08:29:48.795613 10525 addons.go:239] Setting addon yakd=true in "addons-618522"
I1206 08:29:48.795628 10525 addons.go:239] Setting addon cloud-spanner=true in "addons-618522"
I1206 08:29:48.795618 10525 addons.go:70] Setting metrics-server=true in profile "addons-618522"
I1206 08:29:48.795647 10525 host.go:66] Checking if "addons-618522" exists ...
I1206 08:29:48.795655 10525 config.go:182] Loaded profile config "addons-618522": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:29:48.795659 10525 addons.go:239] Setting addon metrics-server=true in "addons-618522"
I1206 08:29:48.795645 10525 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-618522"
I1206 08:29:48.795687 10525 host.go:66] Checking if "addons-618522" exists ...
I1206 08:29:48.795697 10525 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-618522"
I1206 08:29:48.795697 10525 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-618522"
I1206 08:29:48.795713 10525 host.go:66] Checking if "addons-618522" exists ...
I1206 08:29:48.795730 10525 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-618522"
I1206 08:29:48.795745 10525 host.go:66] Checking if "addons-618522" exists ...
I1206 08:29:48.795751 10525 host.go:66] Checking if "addons-618522" exists ...
I1206 08:29:48.796244 10525 addons.go:70] Setting gcp-auth=true in profile "addons-618522"
I1206 08:29:48.796294 10525 mustload.go:66] Loading cluster: addons-618522
I1206 08:29:48.796337 10525 addons.go:70] Setting ingress-dns=true in profile "addons-618522"
I1206 08:29:48.796371 10525 addons.go:239] Setting addon ingress-dns=true in "addons-618522"
I1206 08:29:48.796373 10525 addons.go:70] Setting ingress=true in profile "addons-618522"
I1206 08:29:48.796405 10525 addons.go:239] Setting addon ingress=true in "addons-618522"
I1206 08:29:48.796406 10525 host.go:66] Checking if "addons-618522" exists ...
I1206 08:29:48.796434 10525 host.go:66] Checking if "addons-618522" exists ...
I1206 08:29:48.796570 10525 config.go:182] Loaded profile config "addons-618522": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:29:48.797202 10525 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-618522"
I1206 08:29:48.797232 10525 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-618522"
I1206 08:29:48.797260 10525 host.go:66] Checking if "addons-618522" exists ...
I1206 08:29:48.797320 10525 addons.go:70] Setting storage-provisioner=true in profile "addons-618522"
I1206 08:29:48.797341 10525 addons.go:239] Setting addon storage-provisioner=true in "addons-618522"
I1206 08:29:48.797366 10525 host.go:66] Checking if "addons-618522" exists ...
I1206 08:29:48.797508 10525 out.go:179] * Verifying Kubernetes components...
I1206 08:29:48.797598 10525 addons.go:70] Setting inspektor-gadget=true in profile "addons-618522"
I1206 08:29:48.797621 10525 addons.go:239] Setting addon inspektor-gadget=true in "addons-618522"
I1206 08:29:48.797654 10525 host.go:66] Checking if "addons-618522" exists ...
I1206 08:29:48.797715 10525 addons.go:70] Setting volcano=true in profile "addons-618522"
I1206 08:29:48.797730 10525 addons.go:239] Setting addon volcano=true in "addons-618522"
I1206 08:29:48.797751 10525 host.go:66] Checking if "addons-618522" exists ...
I1206 08:29:48.797876 10525 addons.go:70] Setting default-storageclass=true in profile "addons-618522"
I1206 08:29:48.797896 10525 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-618522"
I1206 08:29:48.797922 10525 addons.go:70] Setting volumesnapshots=true in profile "addons-618522"
I1206 08:29:48.797936 10525 addons.go:239] Setting addon volumesnapshots=true in "addons-618522"
I1206 08:29:48.797977 10525 host.go:66] Checking if "addons-618522" exists ...
I1206 08:29:48.798187 10525 addons.go:70] Setting registry=true in profile "addons-618522"
I1206 08:29:48.798209 10525 addons.go:239] Setting addon registry=true in "addons-618522"
I1206 08:29:48.798232 10525 host.go:66] Checking if "addons-618522" exists ...
I1206 08:29:48.798262 10525 addons.go:70] Setting registry-creds=true in profile "addons-618522"
I1206 08:29:48.798276 10525 addons.go:239] Setting addon registry-creds=true in "addons-618522"
I1206 08:29:48.798294 10525 host.go:66] Checking if "addons-618522" exists ...
I1206 08:29:48.798395 10525 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-618522"
I1206 08:29:48.798428 10525 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-618522"
I1206 08:29:48.799297 10525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1206 08:29:48.802916 10525 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1206 08:29:48.802916 10525 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
I1206 08:29:48.803885 10525 host.go:66] Checking if "addons-618522" exists ...
I1206 08:29:48.804408 10525 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1206 08:29:48.804411 10525 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1206 08:29:48.804414 10525 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1206 08:29:48.804982 10525 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1206 08:29:48.805168 10525 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1206 08:29:48.804457 10525 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1206 08:29:48.805655 10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1206 08:29:48.805986 10525 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1206 08:29:48.806002 10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1206 08:29:48.806637 10525 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
W1206 08:29:48.806961 10525 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1206 08:29:48.807295 10525 addons.go:239] Setting addon default-storageclass=true in "addons-618522"
I1206 08:29:48.807338 10525 host.go:66] Checking if "addons-618522" exists ...
I1206 08:29:48.807578 10525 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1206 08:29:48.807622 10525 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1206 08:29:48.807641 10525 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1206 08:29:48.807655 10525 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1206 08:29:48.807703 10525 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1206 08:29:48.808650 10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1206 08:29:48.808382 10525 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1206 08:29:48.808384 10525 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
I1206 08:29:48.809058 10525 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-618522"
I1206 08:29:48.809783 10525 host.go:66] Checking if "addons-618522" exists ...
I1206 08:29:48.809378 10525 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1206 08:29:48.809945 10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1206 08:29:48.809382 10525 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1206 08:29:48.810038 10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1206 08:29:48.810068 10525 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1206 08:29:48.810083 10525 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1206 08:29:48.809389 10525 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1206 08:29:48.810283 10525 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1206 08:29:48.811241 10525 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1206 08:29:48.811251 10525 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1206 08:29:48.811257 10525 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1206 08:29:48.811270 10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1206 08:29:48.811293 10525 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
I1206 08:29:48.811342 10525 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1206 08:29:48.811248 10525 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1206 08:29:48.811371 10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1206 08:29:48.811577 10525 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1206 08:29:48.811595 10525 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1206 08:29:48.812244 10525 out.go:179] - Using image docker.io/registry:3.0.0
I1206 08:29:48.813292 10525 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1206 08:29:48.813304 10525 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1206 08:29:48.813369 10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1206 08:29:48.813303 10525 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1206 08:29:48.813803 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.814551 10525 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1206 08:29:48.814576 10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1206 08:29:48.815133 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.815313 10525 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1206 08:29:48.815766 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:48.815796 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.816557 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.816951 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:48.817607 10525 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1206 08:29:48.817688 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:48.817932 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.818251 10525 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1206 08:29:48.818587 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:48.818368 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:48.818641 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.819347 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:48.819529 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.820250 10525 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1206 08:29:48.820708 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:48.820736 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.821077 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.821087 10525 out.go:179] - Using image docker.io/busybox:stable
I1206 08:29:48.821679 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:48.821985 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.822649 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.822789 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:48.822819 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.822958 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:48.822988 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.823015 10525 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1206 08:29:48.823026 10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1206 08:29:48.823185 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.823428 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.823451 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:48.823551 10525 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1206 08:29:48.823743 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:48.823760 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:48.823808 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.823906 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.824257 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.824339 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:48.824607 10525 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1206 08:29:48.824638 10525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1206 08:29:48.824851 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:48.824877 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.824931 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:48.824961 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.825029 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:48.825065 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.825198 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:48.825246 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:48.825275 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.825325 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.825607 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:48.825644 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:48.825700 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:48.825978 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.826811 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:48.826849 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.826936 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:48.826966 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.827069 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:48.827406 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:48.829578 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.829722 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.829982 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:48.830013 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.830082 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:48.830112 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:48.830157 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:48.830379 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
W1206 08:29:49.088159 10525 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47152->192.168.39.168:22: read: connection reset by peer
I1206 08:29:49.088194 10525 retry.go:31] will retry after 169.45351ms: ssh: handshake failed: read tcp 192.168.39.1:47152->192.168.39.168:22: read: connection reset by peer
W1206 08:29:49.141693 10525 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47172->192.168.39.168:22: read: connection reset by peer
I1206 08:29:49.141724 10525 retry.go:31] will retry after 264.37626ms: ssh: handshake failed: read tcp 192.168.39.1:47172->192.168.39.168:22: read: connection reset by peer
I1206 08:29:49.285513 10525 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1206 08:29:49.285575 10525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1206 08:29:49.759226 10525 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1206 08:29:49.759267 10525 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1206 08:29:49.807120 10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1206 08:29:49.835568 10525 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1206 08:29:49.835598 10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1206 08:29:49.872330 10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1206 08:29:49.883624 10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1206 08:29:49.896992 10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1206 08:29:49.958970 10525 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1206 08:29:49.959004 10525 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1206 08:29:49.964200 10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1206 08:29:50.038529 10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1206 08:29:50.042668 10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1206 08:29:50.047585 10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1206 08:29:50.048081 10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1206 08:29:50.066477 10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1206 08:29:50.186790 10525 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1206 08:29:50.186815 10525 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1206 08:29:50.309536 10525 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1206 08:29:50.309561 10525 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1206 08:29:50.514315 10525 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1206 08:29:50.514341 10525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1206 08:29:50.515009 10525 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1206 08:29:50.515025 10525 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1206 08:29:50.746561 10525 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1206 08:29:50.746586 10525 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1206 08:29:50.886154 10525 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1206 08:29:50.886177 10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1206 08:29:50.925022 10525 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1206 08:29:50.925051 10525 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1206 08:29:50.963173 10525 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1206 08:29:50.963198 10525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1206 08:29:50.970820 10525 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1206 08:29:50.970842 10525 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1206 08:29:51.140333 10525 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1206 08:29:51.140356 10525 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1206 08:29:51.263612 10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1206 08:29:51.268230 10525 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1206 08:29:51.268261 10525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1206 08:29:51.296996 10525 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1206 08:29:51.297016 10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1206 08:29:51.484195 10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1206 08:29:51.732851 10525 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1206 08:29:51.732876 10525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1206 08:29:51.779554 10525 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1206 08:29:51.779580 10525 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1206 08:29:51.842287 10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1206 08:29:52.078856 10525 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1206 08:29:52.078884 10525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1206 08:29:52.105790 10525 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1206 08:29:52.105816 10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1206 08:29:52.629192 10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1206 08:29:52.640261 10525 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1206 08:29:52.640283 10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1206 08:29:52.867462 10525 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.581916645s)
I1206 08:29:52.867452 10525 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.581838874s)
I1206 08:29:52.867559 10525 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1206 08:29:52.868103 10525 node_ready.go:35] waiting up to 6m0s for node "addons-618522" to be "Ready" ...
I1206 08:29:52.881190 10525 node_ready.go:49] node "addons-618522" is "Ready"
I1206 08:29:52.881226 10525 node_ready.go:38] duration metric: took 13.099432ms for node "addons-618522" to be "Ready" ...
I1206 08:29:52.881241 10525 api_server.go:52] waiting for apiserver process to appear ...
I1206 08:29:52.881302 10525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1206 08:29:53.260520 10525 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1206 08:29:53.260547 10525 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1206 08:29:53.375634 10525 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-618522" context rescaled to 1 replicas
I1206 08:29:53.751013 10525 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1206 08:29:53.751038 10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1206 08:29:54.069729 10525 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1206 08:29:54.069750 10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1206 08:29:54.277752 10525 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1206 08:29:54.277774 10525 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1206 08:29:54.550958 10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1206 08:29:56.156333 10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (6.349175006s)
I1206 08:29:56.256152 10525 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1206 08:29:56.258960 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:56.259407 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:56.259434 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:56.259596 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:56.664839 10525 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1206 08:29:56.826326 10525 addons.go:239] Setting addon gcp-auth=true in "addons-618522"
I1206 08:29:56.826376 10525 host.go:66] Checking if "addons-618522" exists ...
I1206 08:29:56.828233 10525 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1206 08:29:56.830118 10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:56.830476 10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
I1206 08:29:56.830499 10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
I1206 08:29:56.830694 10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
I1206 08:29:58.047720 10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.175341556s)
I1206 08:29:58.047755 10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.164103792s)
I1206 08:29:58.047767 10525 addons.go:495] Verifying addon ingress=true in "addons-618522"
I1206 08:29:58.047883 10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.150860316s)
I1206 08:29:58.047997 10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.083767565s)
I1206 08:29:58.048018 10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.009458228s)
I1206 08:29:58.048092 10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.005398552s)
I1206 08:29:58.048117 10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.000509685s)
I1206 08:29:58.048163 10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.000059127s)
I1206 08:29:58.048175 10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.981677743s)
I1206 08:29:58.048219 10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.784582438s)
I1206 08:29:58.048246 10525 addons.go:495] Verifying addon registry=true in "addons-618522"
I1206 08:29:58.048277 10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.564060056s)
I1206 08:29:58.048307 10525 addons.go:495] Verifying addon metrics-server=true in "addons-618522"
I1206 08:29:58.048365 10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.206051037s)
I1206 08:29:58.049707 10525 out.go:179] * Verifying ingress addon...
I1206 08:29:58.050261 10525 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-618522 service yakd-dashboard -n yakd-dashboard
I1206 08:29:58.050265 10525 out.go:179] * Verifying registry addon...
I1206 08:29:58.051458 10525 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1206 08:29:58.052211 10525 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1206 08:29:58.089198 10525 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1206 08:29:58.089224 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:29:58.089290 10525 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1206 08:29:58.089303 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
W1206 08:29:58.102364 10525 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I1206 08:29:58.160109 10525 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.27878297s)
I1206 08:29:58.160139 10525 api_server.go:72] duration metric: took 9.364753933s to wait for apiserver process to appear ...
I1206 08:29:58.160146 10525 api_server.go:88] waiting for apiserver healthz status ...
I1206 08:29:58.160167 10525 api_server.go:253] Checking apiserver healthz at https://192.168.39.168:8443/healthz ...
I1206 08:29:58.160187 10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.530952988s)
W1206 08:29:58.160236 10525 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1206 08:29:58.160274 10525 retry.go:31] will retry after 331.684035ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1206 08:29:58.185232 10525 api_server.go:279] https://192.168.39.168:8443/healthz returned 200:
ok
I1206 08:29:58.186291 10525 api_server.go:141] control plane version: v1.34.2
I1206 08:29:58.186317 10525 api_server.go:131] duration metric: took 26.163365ms to wait for apiserver health ...
I1206 08:29:58.186330 10525 system_pods.go:43] waiting for kube-system pods to appear ...
I1206 08:29:58.219149 10525 system_pods.go:59] 16 kube-system pods found
I1206 08:29:58.219197 10525 system_pods.go:61] "amd-gpu-device-plugin-2k5hq" [c5883664-cfdc-4af0-8f2c-6404a2eb83dd] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1206 08:29:58.219211 10525 system_pods.go:61] "coredns-66bc5c9577-7c7k7" [fb10465b-d4eb-4157-8fba-f9ecee814344] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1206 08:29:58.219221 10525 system_pods.go:61] "coredns-66bc5c9577-n5nl7" [d09b0bf4-9d8e-49d4-a96e-c0c0e841abaf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1206 08:29:58.219231 10525 system_pods.go:61] "etcd-addons-618522" [c6f9a8f5-e31d-49b3-bccd-4bcfa6772584] Running
I1206 08:29:58.219239 10525 system_pods.go:61] "kube-apiserver-addons-618522" [5cdad140-9557-499c-a8ba-9cd6abd57a66] Running
I1206 08:29:58.219246 10525 system_pods.go:61] "kube-controller-manager-addons-618522" [92e42c76-1eb2-4ba2-9888-7db8e39e1efa] Running
I1206 08:29:58.219266 10525 system_pods.go:61] "kube-ingress-dns-minikube" [96c41d37-7317-4033-b500-9fcd4e3ea24b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1206 08:29:58.219275 10525 system_pods.go:61] "kube-proxy-g62jv" [2dc778d5-5fb1-4e20-be27-75b606e19155] Running
I1206 08:29:58.219279 10525 system_pods.go:61] "kube-scheduler-addons-618522" [56dfd1ed-e4ab-4bdc-834f-02de7b30036d] Running
I1206 08:29:58.219287 10525 system_pods.go:61] "metrics-server-85b7d694d7-9tv6q" [1acee34d-7cc9-4f91-81a5-5af04cf36b68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1206 08:29:58.219298 10525 system_pods.go:61] "nvidia-device-plugin-daemonset-mgdnq" [ba7d5636-4bd4-4737-a2f4-8b93aadfc08d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1206 08:29:58.219308 10525 system_pods.go:61] "registry-6b586f9694-45g8h" [9bf3de1f-8c67-4f56-8ed4-4820b8abc96d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1206 08:29:58.219319 10525 system_pods.go:61] "registry-creds-764b6fb674-qgdbz" [597094a8-35c3-4f4c-b160-93e5d951bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1206 08:29:58.219328 10525 system_pods.go:61] "registry-proxy-nj49l" [6b459c6d-2dff-4d22-afc5-16895571af55] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1206 08:29:58.219335 10525 system_pods.go:61] "snapshot-controller-7d9fbc56b8-5scnk" [7d011425-02ab-4c8a-b267-36e33db2790d] Pending
I1206 08:29:58.219347 10525 system_pods.go:61] "storage-provisioner" [db8e1388-2d9d-4022-afb8-cd29b3ab2d3a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1206 08:29:58.219356 10525 system_pods.go:74] duration metric: took 33.018079ms to wait for pod list to return data ...
I1206 08:29:58.219372 10525 default_sa.go:34] waiting for default service account to be created ...
I1206 08:29:58.251987 10525 default_sa.go:45] found service account: "default"
I1206 08:29:58.252015 10525 default_sa.go:55] duration metric: took 32.635563ms for default service account to be created ...
I1206 08:29:58.252026 10525 system_pods.go:116] waiting for k8s-apps to be running ...
I1206 08:29:58.332539 10525 system_pods.go:86] 17 kube-system pods found
I1206 08:29:58.332580 10525 system_pods.go:89] "amd-gpu-device-plugin-2k5hq" [c5883664-cfdc-4af0-8f2c-6404a2eb83dd] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1206 08:29:58.332590 10525 system_pods.go:89] "coredns-66bc5c9577-7c7k7" [fb10465b-d4eb-4157-8fba-f9ecee814344] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1206 08:29:58.332602 10525 system_pods.go:89] "coredns-66bc5c9577-n5nl7" [d09b0bf4-9d8e-49d4-a96e-c0c0e841abaf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1206 08:29:58.332615 10525 system_pods.go:89] "etcd-addons-618522" [c6f9a8f5-e31d-49b3-bccd-4bcfa6772584] Running
I1206 08:29:58.332621 10525 system_pods.go:89] "kube-apiserver-addons-618522" [5cdad140-9557-499c-a8ba-9cd6abd57a66] Running
I1206 08:29:58.332626 10525 system_pods.go:89] "kube-controller-manager-addons-618522" [92e42c76-1eb2-4ba2-9888-7db8e39e1efa] Running
I1206 08:29:58.332638 10525 system_pods.go:89] "kube-ingress-dns-minikube" [96c41d37-7317-4033-b500-9fcd4e3ea24b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1206 08:29:58.332643 10525 system_pods.go:89] "kube-proxy-g62jv" [2dc778d5-5fb1-4e20-be27-75b606e19155] Running
I1206 08:29:58.332650 10525 system_pods.go:89] "kube-scheduler-addons-618522" [56dfd1ed-e4ab-4bdc-834f-02de7b30036d] Running
I1206 08:29:58.332658 10525 system_pods.go:89] "metrics-server-85b7d694d7-9tv6q" [1acee34d-7cc9-4f91-81a5-5af04cf36b68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1206 08:29:58.332669 10525 system_pods.go:89] "nvidia-device-plugin-daemonset-mgdnq" [ba7d5636-4bd4-4737-a2f4-8b93aadfc08d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1206 08:29:58.332678 10525 system_pods.go:89] "registry-6b586f9694-45g8h" [9bf3de1f-8c67-4f56-8ed4-4820b8abc96d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1206 08:29:58.332687 10525 system_pods.go:89] "registry-creds-764b6fb674-qgdbz" [597094a8-35c3-4f4c-b160-93e5d951bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1206 08:29:58.332694 10525 system_pods.go:89] "registry-proxy-nj49l" [6b459c6d-2dff-4d22-afc5-16895571af55] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1206 08:29:58.332703 10525 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5scnk" [7d011425-02ab-4c8a-b267-36e33db2790d] Pending
I1206 08:29:58.332709 10525 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mvfw9" [2ebcc929-b368-4571-bc60-16649c316fde] Pending
I1206 08:29:58.332720 10525 system_pods.go:89] "storage-provisioner" [db8e1388-2d9d-4022-afb8-cd29b3ab2d3a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1206 08:29:58.332729 10525 system_pods.go:126] duration metric: took 80.696697ms to wait for k8s-apps to be running ...
I1206 08:29:58.332741 10525 system_svc.go:44] waiting for kubelet service to be running ....
I1206 08:29:58.332824 10525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1206 08:29:58.492693 10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1206 08:29:58.575975 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:29:58.576046 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:29:59.070416 10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.519409848s)
I1206 08:29:59.070457 10525 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-618522"
I1206 08:29:59.070482 10525 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.242210015s)
I1206 08:29:59.070517 10525 system_svc.go:56] duration metric: took 737.770542ms WaitForService to wait for kubelet
I1206 08:29:59.070538 10525 kubeadm.go:587] duration metric: took 10.275150195s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1206 08:29:59.070664 10525 node_conditions.go:102] verifying NodePressure condition ...
I1206 08:29:59.072201 10525 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1206 08:29:59.072202 10525 out.go:179] * Verifying csi-hostpath-driver addon...
I1206 08:29:59.072722 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:29:59.074640 10525 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1206 08:29:59.075199 10525 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1206 08:29:59.076440 10525 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1206 08:29:59.076474 10525 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1206 08:29:59.101932 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:29:59.102082 10525 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1206 08:29:59.102102 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:29:59.140252 10525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1206 08:29:59.140281 10525 node_conditions.go:123] node cpu capacity is 2
I1206 08:29:59.140295 10525 node_conditions.go:105] duration metric: took 69.622332ms to run NodePressure ...
I1206 08:29:59.140305 10525 start.go:242] waiting for startup goroutines ...
I1206 08:29:59.242846 10525 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1206 08:29:59.242865 10525 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1206 08:29:59.383332 10525 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1206 08:29:59.383350 10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1206 08:29:59.468731 10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1206 08:29:59.562774 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:29:59.564379 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:29:59.580147 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:00.059201 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:00.059536 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:00.083969 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:00.560066 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:00.561127 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:00.582650 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:00.659928 10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.167189738s)
I1206 08:30:01.052650 10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.583891498s)
I1206 08:30:01.053581 10525 addons.go:495] Verifying addon gcp-auth=true in "addons-618522"
I1206 08:30:01.054828 10525 out.go:179] * Verifying gcp-auth addon...
I1206 08:30:01.057160 10525 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1206 08:30:01.086524 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:01.086804 10525 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1206 08:30:01.086818 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:01.086827 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:01.088817 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:01.558206 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:01.558382 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:01.565198 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:01.582023 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:02.057641 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:02.057670 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:02.059402 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:02.080352 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:02.564500 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:02.566369 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:02.566497 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:02.582081 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:03.060152 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:03.061359 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:03.065734 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:03.080964 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:03.559006 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:03.561593 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:03.561891 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:03.578482 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:04.067551 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:04.069268 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:04.069767 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:04.165251 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:04.557683 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:04.557819 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:04.563740 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:04.580431 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:05.066010 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:05.068942 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:05.072715 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:05.080696 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:05.556670 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:05.556985 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:05.560801 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:05.579001 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:06.056885 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:06.057112 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:06.061614 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:06.080108 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:06.556083 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:06.556825 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:06.560070 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:06.579182 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:07.056433 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:07.056910 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:07.061391 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:07.081977 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:07.556615 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:07.556937 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:07.561129 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:07.580221 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:08.057196 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:08.059048 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:08.060312 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:08.156490 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:08.556538 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:08.557141 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:08.560767 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:08.579085 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:09.056737 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:09.057045 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:09.061408 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:09.079521 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:09.555007 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:09.557637 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:09.560193 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:09.578748 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:10.057753 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:10.057900 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:10.060701 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:10.079761 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:10.556534 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:10.558271 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:10.560842 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:10.578634 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:11.056452 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:11.059092 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:11.060441 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:11.081797 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:11.555874 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:11.561558 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:11.566972 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:11.578978 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:12.058029 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:12.058155 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:12.061481 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:12.083400 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:12.558281 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:12.560596 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:12.563102 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:12.582021 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:13.059893 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:13.059957 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:13.063159 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:13.086101 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:13.765387 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:13.765500 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:13.767923 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:13.769020 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:14.066619 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:14.068734 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:14.068882 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:14.084093 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:14.571215 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:14.575404 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:14.575507 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:14.586291 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:15.059501 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:15.059501 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:15.062832 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:15.084649 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:15.557191 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:15.557372 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:15.559947 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:15.578570 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:16.056955 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:16.057058 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:16.061339 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:16.080059 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:16.558230 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:16.558530 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:16.560864 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:16.578764 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:17.056284 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:17.056436 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:17.061674 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:17.081408 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:17.555126 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:17.557423 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:17.560604 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:17.579967 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:18.059340 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:18.060319 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:18.062010 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:18.081309 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:18.561018 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:18.563393 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:18.564011 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:18.582939 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:19.061178 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:19.061291 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:19.065501 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:19.080497 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:19.556331 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:19.559333 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:19.561416 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:19.579635 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:20.056995 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:20.059209 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:20.061964 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:20.079819 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:20.555619 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:20.557460 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:20.560029 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:20.579159 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:21.182764 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:21.183087 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:21.183842 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:21.184208 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:21.557457 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:21.557584 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:21.562236 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:21.582260 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:22.058836 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:22.064879 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:22.066212 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:22.083236 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:22.558499 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:22.558656 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:22.564222 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:22.581090 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:23.058092 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:23.062017 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:23.063953 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:23.079856 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:23.560860 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:23.566724 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:23.569904 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:23.582480 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:24.059372 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:24.062781 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:24.063652 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:24.080339 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:24.563224 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:24.563236 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:24.565307 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:24.580665 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:25.061135 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:25.065257 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:25.065302 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:25.081534 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:25.555023 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:25.557751 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:25.562773 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:25.581784 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:26.547637 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:26.550720 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:26.550808 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:26.550902 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:26.555554 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:26.558819 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:26.561129 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:26.580323 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:27.061869 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:27.062914 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:27.063109 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:27.080933 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:27.556000 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:27.557630 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:27.561553 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:27.581326 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:28.066244 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:28.066328 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:28.066668 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:28.086811 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:28.558831 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:28.559169 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:28.565892 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:28.733895 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:29.059075 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:29.061546 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:29.065025 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:29.081273 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:29.562957 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:29.563427 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:29.569361 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:29.869108 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:30.057817 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:30.058057 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:30.060560 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:30.083935 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:30.557888 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:30.559573 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:30.564152 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:30.582104 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:31.060754 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:31.061881 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:31.062146 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:31.082955 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:31.557953 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:31.558071 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:31.563014 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:31.580838 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:32.057639 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:32.060269 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:32.061676 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:32.080198 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:32.558368 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:32.566158 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:32.567534 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:32.578604 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:33.100780 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:33.100896 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:33.101166 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:33.101307 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:33.568511 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:33.568555 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:33.568853 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:33.579460 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:34.057999 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:34.058144 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:34.060496 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:34.079210 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:34.557550 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:34.558137 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:34.560145 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:34.578497 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:35.055548 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:35.057246 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:35.060722 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:35.078426 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:35.556595 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:35.557446 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:35.560440 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:35.579158 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:36.056619 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:36.058063 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:36.061578 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:36.082897 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:36.562698 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:36.563318 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:36.569429 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:36.585415 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:37.058215 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:37.062186 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:37.063895 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:37.079208 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:37.558757 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:37.558908 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:37.562116 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:37.579277 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:38.057198 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 08:30:38.057291 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:38.059247 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:38.079464 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:38.555959 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:38.556861 10525 kapi.go:107] duration metric: took 40.504646698s to wait for kubernetes.io/minikube-addons=registry ...
I1206 08:30:38.560323 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:38.579264 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:39.055386 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:39.061086 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:39.078994 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:39.556829 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:39.560573 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:39.578749 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:40.055192 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:40.064128 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:40.080296 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:40.555962 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:40.561962 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:40.581827 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:41.055856 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:41.061859 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:41.081246 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:41.558146 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:41.562511 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:41.583504 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:42.057200 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:42.064768 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:42.080174 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:42.558045 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:42.566152 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:42.585204 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:43.056251 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:43.061838 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:43.080558 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:43.640520 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:43.640906 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:43.641115 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:44.056720 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:44.060276 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:44.078637 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:44.556318 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:44.560430 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:44.579399 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:45.055113 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:45.060673 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:45.079549 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:45.559390 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:45.561847 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:45.580604 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:46.057573 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:46.063593 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:46.080393 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:46.556719 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:46.560415 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:46.583599 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:47.055231 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:47.064257 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:47.082571 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:47.554889 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:47.560655 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:47.580481 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:48.057889 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:48.061397 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:48.079509 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:48.556434 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:48.566571 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:48.583546 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:49.057119 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:49.062834 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:49.080356 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:49.555762 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:49.560646 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:49.579190 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:50.057144 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:50.064234 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:50.079443 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:50.554828 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:50.560692 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:50.579555 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:51.055943 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:51.062022 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:51.079282 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:51.556145 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:51.564624 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:51.581190 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:52.057530 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:52.062311 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:52.082636 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:52.555654 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:52.560637 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:52.579707 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:53.057181 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:53.060392 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:53.080962 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:53.559708 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:53.565993 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:53.580691 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:54.057875 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:54.062729 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:54.080162 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:54.563837 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:54.565427 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:54.583217 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:55.054675 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:55.069825 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:55.082983 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:55.557306 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:55.561424 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:55.580640 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:56.055460 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:56.060748 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:56.080212 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:56.564361 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:56.565593 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:56.579600 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:57.058806 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:57.064478 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:57.085548 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:57.556298 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:57.560522 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:57.580728 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:58.067087 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:58.067363 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:58.085258 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:58.556708 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:58.564193 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:58.582093 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:59.055395 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:59.062357 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:59.081128 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:30:59.556931 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:30:59.560489 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:30:59.579627 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:00.057933 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:00.063621 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:00.082515 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:00.555574 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:00.562058 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:00.579200 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:01.080204 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:01.080902 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:01.086276 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:01.557706 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:01.560433 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:01.580222 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:02.072131 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:02.072679 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:02.085928 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:02.574528 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:02.575237 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:02.579355 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:03.070006 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:03.070129 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:03.090947 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:03.561265 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:03.563776 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:03.581416 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:04.064599 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:04.064764 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:04.086494 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:04.567573 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:04.567573 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:04.599422 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:05.063171 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:05.066378 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:05.079388 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:05.561928 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:05.569211 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:05.583952 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:06.137782 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:06.140205 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:06.140370 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:06.561584 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:06.565256 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:06.586400 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:07.058483 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:07.068151 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:07.080699 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:07.559034 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:07.573129 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:07.579637 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:08.056155 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:08.061068 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:08.080236 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:08.560948 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:08.561916 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:08.579711 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:09.055514 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:09.060937 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:09.080000 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:09.555901 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:09.563029 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:09.582605 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:10.055376 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:10.060625 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:10.083187 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:10.557047 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:10.561853 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:10.578802 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:11.059344 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:11.063972 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:11.083976 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:11.558427 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:11.560928 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:11.578629 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:12.059358 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:12.063078 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:12.078261 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:12.558965 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:12.562606 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:12.580999 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:13.056996 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:13.061222 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:13.079097 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:13.561176 10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 08:31:13.563065 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:13.579286 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:14.055777 10525 kapi.go:107] duration metric: took 1m16.004311352s to wait for app.kubernetes.io/name=ingress-nginx ...
I1206 08:31:14.060954 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:14.079658 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:14.562725 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:14.579435 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:15.061879 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:15.079159 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:15.561885 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:15.578751 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:16.061272 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:16.078888 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:16.569267 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:16.581077 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:17.061574 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:17.083428 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:17.561634 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:17.579991 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:18.063339 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:18.080064 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:18.561562 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 08:31:18.579863 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:19.062302 10525 kapi.go:107] duration metric: took 1m18.00514191s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1206 08:31:19.064040 10525 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-618522 cluster.
I1206 08:31:19.065372 10525 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1206 08:31:19.066757 10525 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I1206 08:31:19.085693 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:19.579899 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:20.080404 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:20.579101 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:21.082024 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:21.580715 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:22.079678 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:22.579528 10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 08:31:23.079330 10525 kapi.go:107] duration metric: took 1m24.004687821s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1206 08:31:23.081024 10525 out.go:179] * Enabled addons: inspektor-gadget, registry-creds, nvidia-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, amd-gpu-device-plugin, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
I1206 08:31:23.082139 10525 addons.go:530] duration metric: took 1m34.286724246s for enable addons: enabled=[inspektor-gadget registry-creds nvidia-device-plugin storage-provisioner cloud-spanner ingress-dns amd-gpu-device-plugin metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
I1206 08:31:23.082186 10525 start.go:247] waiting for cluster config update ...
I1206 08:31:23.082212 10525 start.go:256] writing updated cluster config ...
I1206 08:31:23.082623 10525 ssh_runner.go:195] Run: rm -f paused
I1206 08:31:23.090080 10525 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1206 08:31:23.094190 10525 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7c7k7" in "kube-system" namespace to be "Ready" or be gone ...
I1206 08:31:23.099658 10525 pod_ready.go:94] pod "coredns-66bc5c9577-7c7k7" is "Ready"
I1206 08:31:23.099683 10525 pod_ready.go:86] duration metric: took 5.470554ms for pod "coredns-66bc5c9577-7c7k7" in "kube-system" namespace to be "Ready" or be gone ...
I1206 08:31:23.102222 10525 pod_ready.go:83] waiting for pod "etcd-addons-618522" in "kube-system" namespace to be "Ready" or be gone ...
I1206 08:31:23.109942 10525 pod_ready.go:94] pod "etcd-addons-618522" is "Ready"
I1206 08:31:23.109975 10525 pod_ready.go:86] duration metric: took 7.728641ms for pod "etcd-addons-618522" in "kube-system" namespace to be "Ready" or be gone ...
I1206 08:31:23.114550 10525 pod_ready.go:83] waiting for pod "kube-apiserver-addons-618522" in "kube-system" namespace to be "Ready" or be gone ...
I1206 08:31:23.122329 10525 pod_ready.go:94] pod "kube-apiserver-addons-618522" is "Ready"
I1206 08:31:23.122366 10525 pod_ready.go:86] duration metric: took 7.78139ms for pod "kube-apiserver-addons-618522" in "kube-system" namespace to be "Ready" or be gone ...
I1206 08:31:23.125252 10525 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-618522" in "kube-system" namespace to be "Ready" or be gone ...
I1206 08:31:23.494708 10525 pod_ready.go:94] pod "kube-controller-manager-addons-618522" is "Ready"
I1206 08:31:23.494748 10525 pod_ready.go:86] duration metric: took 369.464687ms for pod "kube-controller-manager-addons-618522" in "kube-system" namespace to be "Ready" or be gone ...
I1206 08:31:23.694607 10525 pod_ready.go:83] waiting for pod "kube-proxy-g62jv" in "kube-system" namespace to be "Ready" or be gone ...
I1206 08:31:24.095376 10525 pod_ready.go:94] pod "kube-proxy-g62jv" is "Ready"
I1206 08:31:24.095400 10525 pod_ready.go:86] duration metric: took 400.765965ms for pod "kube-proxy-g62jv" in "kube-system" namespace to be "Ready" or be gone ...
I1206 08:31:24.295544 10525 pod_ready.go:83] waiting for pod "kube-scheduler-addons-618522" in "kube-system" namespace to be "Ready" or be gone ...
I1206 08:31:24.694310 10525 pod_ready.go:94] pod "kube-scheduler-addons-618522" is "Ready"
I1206 08:31:24.694335 10525 pod_ready.go:86] duration metric: took 398.772619ms for pod "kube-scheduler-addons-618522" in "kube-system" namespace to be "Ready" or be gone ...
I1206 08:31:24.694347 10525 pod_ready.go:40] duration metric: took 1.604236047s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1206 08:31:24.742491 10525 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
I1206 08:31:24.744417 10525 out.go:179] * Done! kubectl is now configured to use "addons-618522" cluster and "default" namespace by default
==> CRI-O <==
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.627566109Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42691043-31ac-448a-855e-87261df80e70 name=/runtime.v1.RuntimeService/Version
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.629907559Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43e8b58a-d353-4437-aee3-e66e2896f189 name=/runtime.v1.ImageService/ImageFsInfo
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.631973167Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765010059631865564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585495,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43e8b58a-d353-4437-aee3-e66e2896f189 name=/runtime.v1.ImageService/ImageFsInfo
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.633644102Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a19d6237-9435-41ae-b4ea-fd5990bf04ab name=/runtime.v1.RuntimeService/ListContainers
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.633728182Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a19d6237-9435-41ae-b4ea-fd5990bf04ab name=/runtime.v1.RuntimeService/ListContainers
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.634181298Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:691f4d648fd2b77571c433e75c6c0aa41c5be67869b9293fe4b511e394cd4566,PodSandboxId:6b4883c8b37cf54998971cda223aee893993a0d010650a89012d0109ee21d649,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765009919032076613,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d05c5f3-11c3-43f8-871c-1feba1d97857,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a79a7075aae608e30eb69ffd592b0bb47fbbd93d6714173436f1d16378752e4,PodSandboxId:68c49695e8e2107927cc584b310aec0aed89246aa314c86ebcbf54b4eacdef46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765009889945659194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28642f2b-ea29-4744-a69a-ca5940220bc5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:052f5654957246b5af7941d2a478138893d80c037a727f1f6813ebf93432ac17,PodSandboxId:5155eb89959d2f9bbe8e798d2c178be539eabf19d43f01f998e40778f1f2f389,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765009872709953434,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kqfmh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0929d19-ff6d-4c68-9412-fb5b07ffdbc0,},Annotations:map[string]string{io.kubernetes.
container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d150608cd68e068f00224c1f99416559f82a3f1aeb0427ab691bff677e324b3b,PodSandboxId:d6bb7cc58913968e800f1f3fc42a4d4a40604533813a7ab72d353a44dee72a91,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258
ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765009860796441376,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z9k7w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 00f6c593-e4cd-444f-aba7-339ba75535f7,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b561a47833358a4bf2821d95579d79a3c858664e9c1ee1d0a0623d1ba993837b,PodSandboxId:3ea76a13c4ee4ca508e855f135f24f8e86c6a4dbe6e6f53616400278740d7923,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765009853632143591,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4lxk7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 619ee1c1-b56d-499e-ab95-7258e5762c45,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83955f1142946d7799f8db0f9c9342642b9fc3c3d429f6da6bd43d36dd032a0e,PodSandboxId:e3dfd570c3797ca4ee0cb188410f6886d83dda1aa9af253c73011b2119ed8b17,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765009852188680705,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-qhj42,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 36a19c9b-df13-4ae3-ad0a-aa86540f0692,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2310b5caf206d277b5b5f1aaecc92cb6e653b3a0d539da262cba2feb6e06f0,PodSandboxId:e78546dbd1eb53ffc0f7df71c26d0f0a7471ecf88eef4758e14aaf8940f418fa,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76
812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765009830013101540,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96c41d37-7317-4033-b500-9fcd4e3ea24b,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5565bf8b9a19301f11d244ad76fcdac348993891755a957194bc89fdd72339cb,PodSandboxId:0b4cdbdbe9bc15467f0948ec184e0c1826e7c9a234c7902b4a5baf5382e52fcf,Metadata:
&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765009808083187142,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2k5hq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5883664-cfdc-4af0-8f2c-6404a2eb83dd,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5196916f6baf47b445b94972b7e511739075008df75b03baf2c42ddc38d8b404,PodSandboxId:50198ca0b5791251bb2c823d990754eb12
713324465bc71625fb9b49e65226f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765009798510773415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db8e1388-2d9d-4022-afb8-cd29b3ab2d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6332cba2bafb17156e934f13eca1d36e74d75167c2a8796e3d86e89b9ff06e,PodSandboxId:5974b2450b9eeaa2d71b23fe75374333c4725dd83dcaa0
eca69a1571742bd8ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765009790254729674,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7c7k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb10465b-d4eb-4157-8fba-f9ecee814344,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ae256e98e16bd81ec742594553ccc43e3e85a83aea2763892cbc386f010836,PodSandboxId:3c711959e72ff170df53a1d1ef8446577d6192fa8abefe6630ecbe4b2888b63a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765009789731326334,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g62jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dc778d5-5fb1-4e20-be27-75b606e19155,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d417bd2e46f5280d633688820c441dcb6a2fef5b1b82d8be3d18480913bbb3,PodSandboxId:f0f8dfdcd430992f1681c0955d8a15af1b28088460392e90060ca09090f8c3cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765009777277684441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae37787e7ba11c90d5ad8259c870c576,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPo
rt\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4b72836806c7676ea45017355553dd89e24109180f8bb53dfa55d87f396a817,PodSandboxId:981203d6ff56d6294885064815cea7c44b5b3b8a82cd574aab675216ece7ce5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765009777244661165,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48fefc1bed6c56770bb0acf517512f62,},Annotations:map[string]string{io.kubernetes.container
.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0775959bef832653f048d0e59bc08f7c21e92bb187e7962c94eb2ff697c8d00,PodSandboxId:49bf54d6e2f6456e4c6359d1bf393427631b9bb3fa712abac3d49db7109336d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765009777253642263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-618522,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: be5217949a7eee65cb54529bc9a96202,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4e3637fc7fbd9a55214bed416a53f59f65b4efa5a8a55e1a5bf335b334a60b,PodSandboxId:5b338c173ba94a3ceedcfd8a2a0c929336fb84dc09f01ab5ce43da27e8672968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765009777212090330,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 814b02689101d7cfa34ab67b41e9b59d,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a19d6237-9435-41ae-b4ea-fd5990bf04ab name=/runtime.v1.RuntimeService/ListContainers
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.637896372Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=5c82db7e-668d-405c-b137-cbc81b0c2408 name=/runtime.v1.RuntimeService/ListPodSandbox
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.638990607Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3798c65617666d5a9f9c76f6ef2d0d3586700088ad4a5392ba0ea04a980a54af,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-q49v8,Uid:ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765010058735682053,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-q49v8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:34:18.415984282Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6b4883c8b37cf54998971cda223aee893993a0d010650a89012d0109ee21d649,Metadata:&PodSandboxMetadata{Name:nginx,Uid:1d05c5f3-11c3-43f8-871c-1feba1d97857,Namespace:default,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1765009914533988654,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d05c5f3-11c3-43f8-871c-1feba1d97857,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:31:54.208236440Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:68c49695e8e2107927cc584b310aec0aed89246aa314c86ebcbf54b4eacdef46,Metadata:&PodSandboxMetadata{Name:busybox,Uid:28642f2b-ea29-4744-a69a-ca5940220bc5,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009885680701060,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28642f2b-ea29-4744-a69a-ca5940220bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:31:25.350254189Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5155eb89959d2f9bbe8e7
98d2c178be539eabf19d43f01f998e40778f1f2f389,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-85d4c799dd-kqfmh,Uid:e0929d19-ff6d-4c68-9412-fb5b07ffdbc0,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009862588653526,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kqfmh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0929d19-ff6d-4c68-9412-fb5b07ffdbc0,pod-template-hash: 85d4c799dd,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:29:57.579041444Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d6bb7cc58913968e800f1f3fc42a4d4a40604533813a7ab72d353a44dee72a91,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-z9k7w,Uid:00f6c593-e4cd-444f-aba7-339ba75535f7,Namespace:ingress-nginx,Attempt:0,},St
ate:SANDBOX_NOTREADY,CreatedAt:1765009799593521841,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: b94d31a2-3ea6-424f-b117-2245a8ecfe0e,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: b94d31a2-3ea6-424f-b117-2245a8ecfe0e,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-z9k7w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 00f6c593-e4cd-444f-aba7-339ba75535f7,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:29:57.985525166Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3ea76a13c4ee4ca508e855f135f24f8e86c6a4dbe6e6f53616400278740d7923,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-4lxk7,Uid:619ee1c1-b56d-499e-ab95-7258e5762c45,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,Crea
tedAt:1765009798930155610,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 180e75e6-2c89-4a4d-9552-a18b59e70f27,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: 180e75e6-2c89-4a4d-9552-a18b59e70f27,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-4lxk7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 619ee1c1-b56d-499e-ab95-7258e5762c45,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:29:57.864293924Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e3dfd570c3797ca4ee0cb188410f6886d83dda1aa9af253c73011b2119ed8b17,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-648f6765c9-qhj42,Uid:36a19c9b-df13-4ae3-ad0a-aa86540f0692,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:17650097964
75973827,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-qhj42,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 36a19c9b-df13-4ae3-ad0a-aa86540f0692,pod-template-hash: 648f6765c9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:29:55.693116419Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:50198ca0b5791251bb2c823d990754eb12713324465bc71625fb9b49e65226f5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:db8e1388-2d9d-4022-afb8-cd29b3ab2d3a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009796403927158,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db8e1388-2d9d-4022-afb8-cd29b3ab2d3a,},Annotations:map[string]string{kubectl
.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-06T08:29:55.933135339Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e78546dbd1eb53ffc0f7df71c26d0f0a7471ecf88eef4758e14aaf8940f418fa,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:96c41d37-7317-4033-b500-9fcd4e3ea24b,Namespace:kube-system,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1765009795153609049,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96c41d37-7317-4033-b500-9fcd4e3ea24b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[
{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-12-06T08:29:54.514897358Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0b4cdbdbe9bc15467f0948ec184e0c1826e7c9a234c7902b4a5baf5382e52fcf,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-2k5hq,Uid:c5883664-cfdc-4af0-8f2c-6404a2eb83dd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009792911271816,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-2k5hq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5883664-cfdc-4af0-8f2c-6404a2eb83dd,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-
plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:29:52.544971495Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3c711959e72ff170df53a1d1ef8446577d6192fa8abefe6630ecbe4b2888b63a,Metadata:&PodSandboxMetadata{Name:kube-proxy-g62jv,Uid:2dc778d5-5fb1-4e20-be27-75b606e19155,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009789343729602,Labels:map[string]string{controller-revision-hash: 66d5f8d6f6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-g62jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dc778d5-5fb1-4e20-be27-75b606e19155,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:29:48.397922546Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5974b2450b9eeaa2d71b23fe75374333c4725dd83dcaa0eca69a1571742bd8ce,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-7c7k7,Uid:fb10465b-d4eb-4157-8
fba-f9ecee814344,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009789278591435,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-7c7k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb10465b-d4eb-4157-8fba-f9ecee814344,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:29:48.899129298Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:49bf54d6e2f6456e4c6359d1bf393427631b9bb3fa712abac3d49db7109336d0,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-618522,Uid:be5217949a7eee65cb54529bc9a96202,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009777022619606,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5217949a7eee65cb54529bc9a962
02,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: be5217949a7eee65cb54529bc9a96202,kubernetes.io/config.seen: 2025-12-06T08:29:36.488516935Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:981203d6ff56d6294885064815cea7c44b5b3b8a82cd574aab675216ece7ce5d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-618522,Uid:48fefc1bed6c56770bb0acf517512f62,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009777020998303,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48fefc1bed6c56770bb0acf517512f62,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 48fefc1bed6c56770bb0acf517512f62,kubernetes.io/config.seen: 2025-12-06T08:29:36.488517943Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f0f8dfdcd430992f1681c0955d8a15af1b28088460392e90060ca09090f
8c3cb,Metadata:&PodSandboxMetadata{Name:etcd-addons-618522,Uid:ae37787e7ba11c90d5ad8259c870c576,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009777016033994,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae37787e7ba11c90d5ad8259c870c576,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.168:2379,kubernetes.io/config.hash: ae37787e7ba11c90d5ad8259c870c576,kubernetes.io/config.seen: 2025-12-06T08:29:36.488518896Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5b338c173ba94a3ceedcfd8a2a0c929336fb84dc09f01ab5ce43da27e8672968,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-618522,Uid:814b02689101d7cfa34ab67b41e9b59d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009777012964483,Labels:map[string]string{component: kube-apiserver,io.kubernete
s.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 814b02689101d7cfa34ab67b41e9b59d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.168:8443,kubernetes.io/config.hash: 814b02689101d7cfa34ab67b41e9b59d,kubernetes.io/config.seen: 2025-12-06T08:29:36.488512965Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5c82db7e-668d-405c-b137-cbc81b0c2408 name=/runtime.v1.RuntimeService/ListPodSandbox
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.640681374Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0525c69-90b6-47f5-9dad-184257eb1c87 name=/runtime.v1.RuntimeService/ListContainers
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.640765910Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0525c69-90b6-47f5-9dad-184257eb1c87 name=/runtime.v1.RuntimeService/ListContainers
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.641921388Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:691f4d648fd2b77571c433e75c6c0aa41c5be67869b9293fe4b511e394cd4566,PodSandboxId:6b4883c8b37cf54998971cda223aee893993a0d010650a89012d0109ee21d649,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765009919032076613,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d05c5f3-11c3-43f8-871c-1feba1d97857,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a79a7075aae608e30eb69ffd592b0bb47fbbd93d6714173436f1d16378752e4,PodSandboxId:68c49695e8e2107927cc584b310aec0aed89246aa314c86ebcbf54b4eacdef46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765009889945659194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28642f2b-ea29-4744-a69a-ca5940220bc5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:052f5654957246b5af7941d2a478138893d80c037a727f1f6813ebf93432ac17,PodSandboxId:5155eb89959d2f9bbe8e798d2c178be539eabf19d43f01f998e40778f1f2f389,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765009872709953434,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kqfmh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0929d19-ff6d-4c68-9412-fb5b07ffdbc0,},Annotations:map[string]string{io.kubernetes.
container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d150608cd68e068f00224c1f99416559f82a3f1aeb0427ab691bff677e324b3b,PodSandboxId:d6bb7cc58913968e800f1f3fc42a4d4a40604533813a7ab72d353a44dee72a91,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258
ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765009860796441376,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z9k7w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 00f6c593-e4cd-444f-aba7-339ba75535f7,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b561a47833358a4bf2821d95579d79a3c858664e9c1ee1d0a0623d1ba993837b,PodSandboxId:3ea76a13c4ee4ca508e855f135f24f8e86c6a4dbe6e6f53616400278740d7923,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765009853632143591,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4lxk7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 619ee1c1-b56d-499e-ab95-7258e5762c45,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83955f1142946d7799f8db0f9c9342642b9fc3c3d429f6da6bd43d36dd032a0e,PodSandboxId:e3dfd570c3797ca4ee0cb188410f6886d83dda1aa9af253c73011b2119ed8b17,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765009852188680705,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-qhj42,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 36a19c9b-df13-4ae3-ad0a-aa86540f0692,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2310b5caf206d277b5b5f1aaecc92cb6e653b3a0d539da262cba2feb6e06f0,PodSandboxId:e78546dbd1eb53ffc0f7df71c26d0f0a7471ecf88eef4758e14aaf8940f418fa,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76
812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765009830013101540,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96c41d37-7317-4033-b500-9fcd4e3ea24b,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5565bf8b9a19301f11d244ad76fcdac348993891755a957194bc89fdd72339cb,PodSandboxId:0b4cdbdbe9bc15467f0948ec184e0c1826e7c9a234c7902b4a5baf5382e52fcf,Metadata:
&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765009808083187142,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2k5hq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5883664-cfdc-4af0-8f2c-6404a2eb83dd,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5196916f6baf47b445b94972b7e511739075008df75b03baf2c42ddc38d8b404,PodSandboxId:50198ca0b5791251bb2c823d990754eb12
713324465bc71625fb9b49e65226f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765009798510773415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db8e1388-2d9d-4022-afb8-cd29b3ab2d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6332cba2bafb17156e934f13eca1d36e74d75167c2a8796e3d86e89b9ff06e,PodSandboxId:5974b2450b9eeaa2d71b23fe75374333c4725dd83dcaa0
eca69a1571742bd8ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765009790254729674,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7c7k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb10465b-d4eb-4157-8fba-f9ecee814344,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ae256e98e16bd81ec742594553ccc43e3e85a83aea2763892cbc386f010836,PodSandboxId:3c711959e72ff170df53a1d1ef8446577d6192fa8abefe6630ecbe4b2888b63a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765009789731326334,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g62jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dc778d5-5fb1-4e20-be27-75b606e19155,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d417bd2e46f5280d633688820c441dcb6a2fef5b1b82d8be3d18480913bbb3,PodSandboxId:f0f8dfdcd430992f1681c0955d8a15af1b28088460392e90060ca09090f8c3cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765009777277684441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae37787e7ba11c90d5ad8259c870c576,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPo
rt\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4b72836806c7676ea45017355553dd89e24109180f8bb53dfa55d87f396a817,PodSandboxId:981203d6ff56d6294885064815cea7c44b5b3b8a82cd574aab675216ece7ce5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765009777244661165,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48fefc1bed6c56770bb0acf517512f62,},Annotations:map[string]string{io.kubernetes.container
.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0775959bef832653f048d0e59bc08f7c21e92bb187e7962c94eb2ff697c8d00,PodSandboxId:49bf54d6e2f6456e4c6359d1bf393427631b9bb3fa712abac3d49db7109336d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765009777253642263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-618522,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: be5217949a7eee65cb54529bc9a96202,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4e3637fc7fbd9a55214bed416a53f59f65b4efa5a8a55e1a5bf335b334a60b,PodSandboxId:5b338c173ba94a3ceedcfd8a2a0c929336fb84dc09f01ab5ce43da27e8672968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765009777212090330,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 814b02689101d7cfa34ab67b41e9b59d,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0525c69-90b6-47f5-9dad-184257eb1c87 name=/runtime.v1.RuntimeService/ListContainers
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.643042276Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6,},},}" file="otel-collector/interceptors.go:62" id=3d66829b-871b-4e5c-8fcf-62e9a884aabe name=/runtime.v1.RuntimeService/ListPodSandbox
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.643172200Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3798c65617666d5a9f9c76f6ef2d0d3586700088ad4a5392ba0ea04a980a54af,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-q49v8,Uid:ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765010058735682053,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-q49v8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:34:18.415984282Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3d66829b-871b-4e5c-8fcf-62e9a884aabe name=/runtime.v1.RuntimeService/ListPodSandbox
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.645163529Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:3798c65617666d5a9f9c76f6ef2d0d3586700088ad4a5392ba0ea04a980a54af,Verbose:false,}" file="otel-collector/interceptors.go:62" id=36bc79fc-7af2-46b9-a0ae-5c8b1bd33c9b name=/runtime.v1.RuntimeService/PodSandboxStatus
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.645264640Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:3798c65617666d5a9f9c76f6ef2d0d3586700088ad4a5392ba0ea04a980a54af,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-q49v8,Uid:ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765010058735682053,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:&UserNamespace{Mode:NODE,Uids:[]*IDMapping{},Gids:[]*IDMapping{},},},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-q49v8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen:
2025-12-06T08:34:18.415984282Z,kubernetes.io/config.source: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=36bc79fc-7af2-46b9-a0ae-5c8b1bd33c9b name=/runtime.v1.RuntimeService/PodSandboxStatus
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.645710464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6,},},}" file="otel-collector/interceptors.go:62" id=bcc0a477-9d10-463e-8e5f-d48618a09974 name=/runtime.v1.RuntimeService/ListContainers
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.645854901Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bcc0a477-9d10-463e-8e5f-d48618a09974 name=/runtime.v1.RuntimeService/ListContainers
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.645929265Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bcc0a477-9d10-463e-8e5f-d48618a09974 name=/runtime.v1.RuntimeService/ListContainers
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.674953726Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36793827-889b-4a36-820a-5dd20fec522d name=/runtime.v1.RuntimeService/Version
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.675031384Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36793827-889b-4a36-820a-5dd20fec522d name=/runtime.v1.RuntimeService/Version
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.677084009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99db151d-c823-428e-96e0-e9e61a22a0fd name=/runtime.v1.ImageService/ImageFsInfo
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.678411712Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765010059678381643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585495,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99db151d-c823-428e-96e0-e9e61a22a0fd name=/runtime.v1.ImageService/ImageFsInfo
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.679680723Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6f98e07-547c-4e36-8b49-c8917deaad5e name=/runtime.v1.RuntimeService/ListContainers
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.679738416Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6f98e07-547c-4e36-8b49-c8917deaad5e name=/runtime.v1.RuntimeService/ListContainers
Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.680199591Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:691f4d648fd2b77571c433e75c6c0aa41c5be67869b9293fe4b511e394cd4566,PodSandboxId:6b4883c8b37cf54998971cda223aee893993a0d010650a89012d0109ee21d649,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765009919032076613,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d05c5f3-11c3-43f8-871c-1feba1d97857,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a79a7075aae608e30eb69ffd592b0bb47fbbd93d6714173436f1d16378752e4,PodSandboxId:68c49695e8e2107927cc584b310aec0aed89246aa314c86ebcbf54b4eacdef46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765009889945659194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28642f2b-ea29-4744-a69a-ca5940220bc5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:052f5654957246b5af7941d2a478138893d80c037a727f1f6813ebf93432ac17,PodSandboxId:5155eb89959d2f9bbe8e798d2c178be539eabf19d43f01f998e40778f1f2f389,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765009872709953434,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kqfmh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0929d19-ff6d-4c68-9412-fb5b07ffdbc0,},Annotations:map[string]string{io.kubernetes.
container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d150608cd68e068f00224c1f99416559f82a3f1aeb0427ab691bff677e324b3b,PodSandboxId:d6bb7cc58913968e800f1f3fc42a4d4a40604533813a7ab72d353a44dee72a91,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258
ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765009860796441376,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z9k7w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 00f6c593-e4cd-444f-aba7-339ba75535f7,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b561a47833358a4bf2821d95579d79a3c858664e9c1ee1d0a0623d1ba993837b,PodSandboxId:3ea76a13c4ee4ca508e855f135f24f8e86c6a4dbe6e6f53616400278740d7923,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765009853632143591,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4lxk7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 619ee1c1-b56d-499e-ab95-7258e5762c45,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83955f1142946d7799f8db0f9c9342642b9fc3c3d429f6da6bd43d36dd032a0e,PodSandboxId:e3dfd570c3797ca4ee0cb188410f6886d83dda1aa9af253c73011b2119ed8b17,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765009852188680705,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-qhj42,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 36a19c9b-df13-4ae3-ad0a-aa86540f0692,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2310b5caf206d277b5b5f1aaecc92cb6e653b3a0d539da262cba2feb6e06f0,PodSandboxId:e78546dbd1eb53ffc0f7df71c26d0f0a7471ecf88eef4758e14aaf8940f418fa,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76
812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765009830013101540,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96c41d37-7317-4033-b500-9fcd4e3ea24b,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5565bf8b9a19301f11d244ad76fcdac348993891755a957194bc89fdd72339cb,PodSandboxId:0b4cdbdbe9bc15467f0948ec184e0c1826e7c9a234c7902b4a5baf5382e52fcf,Metadata:
&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765009808083187142,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2k5hq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5883664-cfdc-4af0-8f2c-6404a2eb83dd,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5196916f6baf47b445b94972b7e511739075008df75b03baf2c42ddc38d8b404,PodSandboxId:50198ca0b5791251bb2c823d990754eb12
713324465bc71625fb9b49e65226f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765009798510773415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db8e1388-2d9d-4022-afb8-cd29b3ab2d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6332cba2bafb17156e934f13eca1d36e74d75167c2a8796e3d86e89b9ff06e,PodSandboxId:5974b2450b9eeaa2d71b23fe75374333c4725dd83dcaa0
eca69a1571742bd8ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765009790254729674,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7c7k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb10465b-d4eb-4157-8fba-f9ecee814344,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ae256e98e16bd81ec742594553ccc43e3e85a83aea2763892cbc386f010836,PodSandboxId:3c711959e72ff170df53a1d1ef8446577d6192fa8abefe6630ecbe4b2888b63a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765009789731326334,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g62jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dc778d5-5fb1-4e20-be27-75b606e19155,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d417bd2e46f5280d633688820c441dcb6a2fef5b1b82d8be3d18480913bbb3,PodSandboxId:f0f8dfdcd430992f1681c0955d8a15af1b28088460392e90060ca09090f8c3cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765009777277684441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae37787e7ba11c90d5ad8259c870c576,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPo
rt\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4b72836806c7676ea45017355553dd89e24109180f8bb53dfa55d87f396a817,PodSandboxId:981203d6ff56d6294885064815cea7c44b5b3b8a82cd574aab675216ece7ce5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765009777244661165,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48fefc1bed6c56770bb0acf517512f62,},Annotations:map[string]string{io.kubernetes.container
.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0775959bef832653f048d0e59bc08f7c21e92bb187e7962c94eb2ff697c8d00,PodSandboxId:49bf54d6e2f6456e4c6359d1bf393427631b9bb3fa712abac3d49db7109336d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765009777253642263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-618522,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: be5217949a7eee65cb54529bc9a96202,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4e3637fc7fbd9a55214bed416a53f59f65b4efa5a8a55e1a5bf335b334a60b,PodSandboxId:5b338c173ba94a3ceedcfd8a2a0c929336fb84dc09f01ab5ce43da27e8672968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765009777212090330,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 814b02689101d7cfa34ab67b41e9b59d,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6f98e07-547c-4e36-8b49-c8917deaad5e name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
691f4d648fd2b docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 2 minutes ago Running nginx 0 6b4883c8b37cf nginx default
3a79a7075aae6 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 2 minutes ago Running busybox 0 68c49695e8e21 busybox default
052f565495724 registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad 3 minutes ago Running controller 0 5155eb89959d2 ingress-nginx-controller-85d4c799dd-kqfmh ingress-nginx
d150608cd68e0 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285 3 minutes ago Exited patch 0 d6bb7cc589139 ingress-nginx-admission-patch-z9k7w ingress-nginx
b561a47833358 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285 3 minutes ago Exited create 0 3ea76a13c4ee4 ingress-nginx-admission-create-4lxk7 ingress-nginx
83955f1142946 docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef 3 minutes ago Running local-path-provisioner 0 e3dfd570c3797 local-path-provisioner-648f6765c9-qhj42 local-path-storage
af2310b5caf20 docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 3 minutes ago Running minikube-ingress-dns 0 e78546dbd1eb5 kube-ingress-dns-minikube kube-system
5565bf8b9a193 docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 0b4cdbdbe9bc1 amd-gpu-device-plugin-2k5hq kube-system
5196916f6baf4 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 50198ca0b5791 storage-provisioner kube-system
8e6332cba2baf 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 5974b2450b9ee coredns-66bc5c9577-7c7k7 kube-system
34ae256e98e16 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45 4 minutes ago Running kube-proxy 0 3c711959e72ff kube-proxy-g62jv kube-system
41d417bd2e46f a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1 4 minutes ago Running etcd 0 f0f8dfdcd4309 etcd-addons-618522 kube-system
e0775959bef83 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8 4 minutes ago Running kube-controller-manager 0 49bf54d6e2f64 kube-controller-manager-addons-618522 kube-system
f4b72836806c7 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952 4 minutes ago Running kube-scheduler 0 981203d6ff56d kube-scheduler-addons-618522 kube-system
ad4e3637fc7fb a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85 4 minutes ago Running kube-apiserver 0 5b338c173ba94 kube-apiserver-addons-618522 kube-system
==> coredns [8e6332cba2bafb17156e934f13eca1d36e74d75167c2a8796e3d86e89b9ff06e] <==
[INFO] 10.244.0.8:41605 - 62879 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000145374s
[INFO] 10.244.0.8:41605 - 60079 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000265763s
[INFO] 10.244.0.8:41605 - 13447 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000612414s
[INFO] 10.244.0.8:41605 - 52694 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000115545s
[INFO] 10.244.0.8:41605 - 50711 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00011907s
[INFO] 10.244.0.8:41605 - 13916 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000118279s
[INFO] 10.244.0.8:41605 - 33962 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000319268s
[INFO] 10.244.0.8:36189 - 38398 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000194412s
[INFO] 10.244.0.8:36189 - 38104 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000179395s
[INFO] 10.244.0.8:35053 - 18874 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000097769s
[INFO] 10.244.0.8:35053 - 18578 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000321563s
[INFO] 10.244.0.8:41902 - 25698 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000123749s
[INFO] 10.244.0.8:41902 - 25464 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000208565s
[INFO] 10.244.0.8:57983 - 20478 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000113851s
[INFO] 10.244.0.8:57983 - 20029 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000332087s
[INFO] 10.244.0.23:60166 - 36512 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000452489s
[INFO] 10.244.0.23:41738 - 39045 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000420528s
[INFO] 10.244.0.23:46380 - 63929 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000151944s
[INFO] 10.244.0.23:53475 - 47117 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000075569s
[INFO] 10.244.0.23:46177 - 26486 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000406063s
[INFO] 10.244.0.23:52294 - 9288 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000463933s
[INFO] 10.244.0.23:48882 - 12403 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001689352s
[INFO] 10.244.0.23:57778 - 48105 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003651804s
[INFO] 10.244.0.27:37196 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000822542s
[INFO] 10.244.0.27:35920 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000348083s
==> describe nodes <==
Name: addons-618522
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-618522
kubernetes.io/os=linux
minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
minikube.k8s.io/name=addons-618522
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_06T08_29_44_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-618522
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 06 Dec 2025 08:29:40 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-618522
AcquireTime: <unset>
RenewTime: Sat, 06 Dec 2025 08:34:19 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 06 Dec 2025 08:32:46 +0000 Sat, 06 Dec 2025 08:29:38 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 06 Dec 2025 08:32:46 +0000 Sat, 06 Dec 2025 08:29:38 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 06 Dec 2025 08:32:46 +0000 Sat, 06 Dec 2025 08:29:38 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 06 Dec 2025 08:32:46 +0000 Sat, 06 Dec 2025 08:29:44 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.168
Hostname: addons-618522
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001784Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001784Ki
pods: 110
System Info:
Machine ID: 57f399ccdddf4d4fb1dfb1180b83c0f4
System UUID: 57f399cc-dddf-4d4f-b1df-b1180b83c0f4
Boot ID: b0ebd717-1090-4118-8f89-05a31099270d
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.2
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m54s
default hello-world-app-5d498dc89-q49v8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 1s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m25s
ingress-nginx ingress-nginx-controller-85d4c799dd-kqfmh 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m22s
kube-system amd-gpu-device-plugin-2k5hq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m27s
kube-system coredns-66bc5c9577-7c7k7 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m31s
kube-system etcd-addons-618522 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m36s
kube-system kube-apiserver-addons-618522 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m36s
kube-system kube-controller-manager-addons-618522 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m36s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m25s
kube-system kube-proxy-g62jv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m31s
kube-system kube-scheduler-addons-618522 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m36s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m24s
local-path-storage local-path-provisioner-648f6765c9-qhj42 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m24s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m28s kube-proxy
Normal Starting 4m43s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m43s (x8 over 4m43s) kubelet Node addons-618522 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m43s (x8 over 4m43s) kubelet Node addons-618522 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m43s (x7 over 4m43s) kubelet Node addons-618522 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m43s kubelet Updated Node Allocatable limit across pods
Normal Starting 4m36s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m36s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m36s kubelet Node addons-618522 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m36s kubelet Node addons-618522 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m36s kubelet Node addons-618522 status is now: NodeHasSufficientPID
Normal NodeReady 4m35s kubelet Node addons-618522 status is now: NodeReady
Normal RegisteredNode 4m32s node-controller Node addons-618522 event: Registered Node addons-618522 in Controller
==> dmesg <==
[Dec 6 08:30] kauditd_printk_skb: 356 callbacks suppressed
[ +4.671273] kauditd_printk_skb: 326 callbacks suppressed
[ +7.214528] kauditd_printk_skb: 5 callbacks suppressed
[ +8.519940] kauditd_printk_skb: 32 callbacks suppressed
[ +5.241905] kauditd_printk_skb: 26 callbacks suppressed
[ +5.978909] kauditd_printk_skb: 11 callbacks suppressed
[ +5.377501] kauditd_printk_skb: 11 callbacks suppressed
[ +6.493161] kauditd_printk_skb: 116 callbacks suppressed
[Dec 6 08:31] kauditd_printk_skb: 61 callbacks suppressed
[ +0.292820] kauditd_printk_skb: 205 callbacks suppressed
[ +6.677761] kauditd_printk_skb: 31 callbacks suppressed
[ +5.591303] kauditd_printk_skb: 32 callbacks suppressed
[ +0.000065] kauditd_printk_skb: 41 callbacks suppressed
[ +14.992772] kauditd_printk_skb: 53 callbacks suppressed
[ +6.097038] kauditd_printk_skb: 22 callbacks suppressed
[ +6.036442] kauditd_printk_skb: 38 callbacks suppressed
[ +0.000070] kauditd_printk_skb: 93 callbacks suppressed
[Dec 6 08:32] kauditd_printk_skb: 119 callbacks suppressed
[ +3.116849] kauditd_printk_skb: 101 callbacks suppressed
[ +2.821228] kauditd_printk_skb: 113 callbacks suppressed
[ +1.795502] kauditd_printk_skb: 112 callbacks suppressed
[ +12.200056] kauditd_printk_skb: 25 callbacks suppressed
[ +0.000378] kauditd_printk_skb: 10 callbacks suppressed
[ +6.843624] kauditd_printk_skb: 41 callbacks suppressed
[Dec 6 08:34] kauditd_printk_skb: 127 callbacks suppressed
==> etcd [41d417bd2e46f5280d633688820c441dcb6a2fef5b1b82d8be3d18480913bbb3] <==
{"level":"info","ts":"2025-12-06T08:30:47.816003Z","caller":"traceutil/trace.go:172","msg":"trace[461106243] transaction","detail":"{read_only:false; response_revision:1027; number_of_response:1; }","duration":"149.640937ms","start":"2025-12-06T08:30:47.666341Z","end":"2025-12-06T08:30:47.815982Z","steps":["trace[461106243] 'process raft request' (duration: 149.51733ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-06T08:31:06.123681Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.928678ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
{"level":"info","ts":"2025-12-06T08:31:06.123739Z","caller":"traceutil/trace.go:172","msg":"trace[1435643251] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1148; }","duration":"128.012788ms","start":"2025-12-06T08:31:05.995714Z","end":"2025-12-06T08:31:06.123727Z","steps":["trace[1435643251] 'range keys from in-memory index tree' (duration: 127.831592ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-06T08:31:06.123905Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"344.541239ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingressclasses\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-06T08:31:06.123926Z","caller":"traceutil/trace.go:172","msg":"trace[448651121] range","detail":"{range_begin:/registry/ingressclasses; range_end:; response_count:0; response_revision:1148; }","duration":"345.687706ms","start":"2025-12-06T08:31:05.778232Z","end":"2025-12-06T08:31:06.123920Z","steps":["trace[448651121] 'range keys from in-memory index tree' (duration: 344.501427ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-06T08:31:06.124653Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T08:31:05.778217Z","time spent":"345.721807ms","remote":"127.0.0.1:33388","response type":"/etcdserverpb.KV/Range","request count":0,"request size":28,"response count":0,"response size":29,"request content":"key:\"/registry/ingressclasses\" limit:1 "}
{"level":"warn","ts":"2025-12-06T08:31:06.126743Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.787325ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
{"level":"info","ts":"2025-12-06T08:31:06.127911Z","caller":"traceutil/trace.go:172","msg":"trace[1474492496] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1148; }","duration":"206.950144ms","start":"2025-12-06T08:31:05.920943Z","end":"2025-12-06T08:31:06.127894Z","steps":["trace[1474492496] 'range keys from in-memory index tree' (duration: 205.206879ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-06T08:31:06.125931Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"353.691877ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-ptprl\" limit:1 ","response":"range_response_count:1 size:4045"}
{"level":"info","ts":"2025-12-06T08:31:06.128563Z","caller":"traceutil/trace.go:172","msg":"trace[217878058] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-patch-ptprl; range_end:; response_count:1; response_revision:1148; }","duration":"356.326034ms","start":"2025-12-06T08:31:05.772225Z","end":"2025-12-06T08:31:06.128551Z","steps":["trace[217878058] 'range keys from in-memory index tree' (duration: 348.898398ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-06T08:31:06.129182Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T08:31:05.772209Z","time spent":"356.669567ms","remote":"127.0.0.1:33166","response type":"/etcdserverpb.KV/Range","request count":0,"request size":54,"response count":1,"response size":4069,"request content":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-ptprl\" limit:1 "}
{"level":"info","ts":"2025-12-06T08:31:11.502610Z","caller":"traceutil/trace.go:172","msg":"trace[1005952380] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"103.268416ms","start":"2025-12-06T08:31:11.399327Z","end":"2025-12-06T08:31:11.502595Z","steps":["trace[1005952380] 'process raft request' (duration: 103.055405ms)"],"step_count":1}
{"level":"info","ts":"2025-12-06T08:31:12.524876Z","caller":"traceutil/trace.go:172","msg":"trace[1370422975] linearizableReadLoop","detail":"{readStateIndex:1193; appliedIndex:1193; }","duration":"186.965546ms","start":"2025-12-06T08:31:12.337892Z","end":"2025-12-06T08:31:12.524858Z","steps":["trace[1370422975] 'read index received' (duration: 186.956669ms)","trace[1370422975] 'applied index is now lower than readState.Index' (duration: 7.392µs)"],"step_count":2}
{"level":"info","ts":"2025-12-06T08:31:12.525015Z","caller":"traceutil/trace.go:172","msg":"trace[1889658527] transaction","detail":"{read_only:false; response_revision:1163; number_of_response:1; }","duration":"294.091785ms","start":"2025-12-06T08:31:12.230906Z","end":"2025-12-06T08:31:12.524998Z","steps":["trace[1889658527] 'process raft request' (duration: 293.971793ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-06T08:31:12.526087Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"188.290224ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.168\" limit:1 ","response":"range_response_count:1 size:135"}
{"level":"info","ts":"2025-12-06T08:31:12.526224Z","caller":"traceutil/trace.go:172","msg":"trace[698991661] range","detail":"{range_begin:/registry/masterleases/192.168.39.168; range_end:; response_count:1; response_revision:1163; }","duration":"188.437754ms","start":"2025-12-06T08:31:12.337777Z","end":"2025-12-06T08:31:12.526215Z","steps":["trace[698991661] 'agreement among raft nodes before linearized reading' (duration: 187.355379ms)"],"step_count":1}
{"level":"info","ts":"2025-12-06T08:31:53.014153Z","caller":"traceutil/trace.go:172","msg":"trace[964558954] transaction","detail":"{read_only:false; response_revision:1377; number_of_response:1; }","duration":"229.271606ms","start":"2025-12-06T08:31:52.784866Z","end":"2025-12-06T08:31:53.014137Z","steps":["trace[964558954] 'process raft request' (duration: 229.17549ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-06T08:31:53.266687Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.218928ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-06T08:31:53.266760Z","caller":"traceutil/trace.go:172","msg":"trace[905094565] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:1379; }","duration":"119.329759ms","start":"2025-12-06T08:31:53.147419Z","end":"2025-12-06T08:31:53.266749Z","steps":["trace[905094565] 'agreement among raft nodes before linearized reading' (duration: 39.29188ms)","trace[905094565] 'range keys from in-memory index tree' (duration: 79.903077ms)"],"step_count":2}
{"level":"warn","ts":"2025-12-06T08:31:53.267483Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.729148ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-mgdnq\" limit:1 ","response":"range_response_count:1 size:4478"}
{"level":"info","ts":"2025-12-06T08:31:53.267512Z","caller":"traceutil/trace.go:172","msg":"trace[358248697] range","detail":"{range_begin:/registry/pods/kube-system/nvidia-device-plugin-daemonset-mgdnq; range_end:; response_count:1; response_revision:1380; }","duration":"112.7655ms","start":"2025-12-06T08:31:53.154739Z","end":"2025-12-06T08:31:53.267505Z","steps":["trace[358248697] 'agreement among raft nodes before linearized reading' (duration: 111.933915ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-06T08:31:53.268140Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.965964ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/kube-system/nvidia-device-plugin-daemonset-9884d469d\" limit:1 ","response":"range_response_count:1 size:2898"}
{"level":"info","ts":"2025-12-06T08:31:53.268164Z","caller":"traceutil/trace.go:172","msg":"trace[2030691538] range","detail":"{range_begin:/registry/controllerrevisions/kube-system/nvidia-device-plugin-daemonset-9884d469d; range_end:; response_count:1; response_revision:1380; }","duration":"114.995569ms","start":"2025-12-06T08:31:53.153163Z","end":"2025-12-06T08:31:53.268159Z","steps":["trace[2030691538] 'agreement among raft nodes before linearized reading' (duration: 113.845207ms)"],"step_count":1}
{"level":"info","ts":"2025-12-06T08:31:53.267768Z","caller":"traceutil/trace.go:172","msg":"trace[1567190529] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1380; }","duration":"126.591674ms","start":"2025-12-06T08:31:53.141169Z","end":"2025-12-06T08:31:53.267760Z","steps":["trace[1567190529] 'process raft request' (duration: 45.575157ms)","trace[1567190529] 'compare' (duration: 79.749235ms)"],"step_count":2}
{"level":"info","ts":"2025-12-06T08:32:25.434353Z","caller":"traceutil/trace.go:172","msg":"trace[1383362334] transaction","detail":"{read_only:false; response_revision:1686; number_of_response:1; }","duration":"154.90819ms","start":"2025-12-06T08:32:25.279262Z","end":"2025-12-06T08:32:25.434171Z","steps":["trace[1383362334] 'process raft request' (duration: 154.729275ms)"],"step_count":1}
==> kernel <==
08:34:20 up 5 min, 0 users, load average: 0.77, 1.31, 0.66
Linux addons-618522 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 4 13:30:13 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [ad4e3637fc7fbd9a55214bed416a53f59f65b4efa5a8a55e1a5bf335b334a60b] <==
Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
E1206 08:30:35.293564 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.97.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.97.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.97.33:443: connect: connection refused" logger="UnhandledError"
E1206 08:30:35.297776 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.97.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.97.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.97.33:443: connect: connection refused" logger="UnhandledError"
I1206 08:30:35.367172 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1206 08:31:37.548687 1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:60400: use of closed network connection
E1206 08:31:37.750344 1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:60438: use of closed network connection
I1206 08:31:47.113884 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.119.18"}
I1206 08:31:54.059525 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1206 08:31:54.245867 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.194.221"}
I1206 08:32:32.662773 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1206 08:32:36.314981 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1206 08:33:00.938069 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1206 08:33:00.938144 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1206 08:33:00.982692 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1206 08:33:00.982980 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1206 08:33:01.011185 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1206 08:33:01.011248 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1206 08:33:01.043125 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1206 08:33:01.044294 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1206 08:33:01.990748 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1206 08:33:02.045849 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
W1206 08:33:02.179714 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
I1206 08:34:18.485578 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.103.133"}
==> kube-controller-manager [e0775959bef832653f048d0e59bc08f7c21e92bb187e7962c94eb2ff697c8d00] <==
E1206 08:33:09.923515 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 08:33:12.444389 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 08:33:12.445642 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 08:33:16.269063 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 08:33:16.270387 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 08:33:16.944164 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 08:33:16.945317 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
I1206 08:33:17.752146 1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
I1206 08:33:17.752264 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1206 08:33:17.812927 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1206 08:33:17.812991 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1206 08:33:22.206839 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 08:33:22.207878 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 08:33:29.557846 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 08:33:29.559242 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 08:33:37.862386 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 08:33:37.863580 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 08:33:45.309875 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 08:33:45.311250 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 08:34:15.746241 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 08:34:15.747262 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 08:34:16.369034 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 08:34:16.370355 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 08:34:18.319309 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 08:34:18.320896 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [34ae256e98e16bd81ec742594553ccc43e3e85a83aea2763892cbc386f010836] <==
I1206 08:29:50.860737 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1206 08:29:50.963197 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1206 08:29:50.964491 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.168"]
E1206 08:29:50.964593 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1206 08:29:51.106941 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1206 08:29:51.107978 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1206 08:29:51.109002 1 server_linux.go:132] "Using iptables Proxier"
I1206 08:29:51.162131 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1206 08:29:51.163229 1 server.go:527] "Version info" version="v1.34.2"
I1206 08:29:51.163261 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1206 08:29:51.172561 1 config.go:200] "Starting service config controller"
I1206 08:29:51.172596 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1206 08:29:51.172611 1 config.go:106] "Starting endpoint slice config controller"
I1206 08:29:51.172614 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1206 08:29:51.172622 1 config.go:403] "Starting serviceCIDR config controller"
I1206 08:29:51.172625 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1206 08:29:51.176024 1 config.go:309] "Starting node config controller"
I1206 08:29:51.176123 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1206 08:29:51.176130 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1206 08:29:51.273384 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1206 08:29:51.273429 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1206 08:29:51.273450 1 shared_informer.go:356] "Caches are synced" controller="service config"
==> kube-scheduler [f4b72836806c7676ea45017355553dd89e24109180f8bb53dfa55d87f396a817] <==
E1206 08:29:40.632648 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1206 08:29:40.632726 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1206 08:29:40.632866 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1206 08:29:40.632959 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1206 08:29:40.632985 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1206 08:29:40.633018 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1206 08:29:40.633042 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1206 08:29:40.633075 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1206 08:29:40.633100 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1206 08:29:40.633130 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1206 08:29:40.633226 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1206 08:29:40.633254 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1206 08:29:40.633621 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1206 08:29:41.443380 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1206 08:29:41.508904 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1206 08:29:41.536483 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1206 08:29:41.664242 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1206 08:29:41.671308 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1206 08:29:41.689080 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1206 08:29:41.733670 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1206 08:29:41.796837 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1206 08:29:41.830084 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1206 08:29:41.888351 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1206 08:29:42.091981 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
I1206 08:29:43.821965 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Dec 06 08:33:04 addons-618522 kubelet[1500]: I1206 08:33:04.067833 1500 scope.go:117] "RemoveContainer" containerID="d88d1d6bd8a47c4ebd59f78086729cc86d8a34a5787cae7136274a0e8df3ccd4"
Dec 06 08:33:04 addons-618522 kubelet[1500]: I1206 08:33:04.068650 1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d88d1d6bd8a47c4ebd59f78086729cc86d8a34a5787cae7136274a0e8df3ccd4"} err="failed to get container status \"d88d1d6bd8a47c4ebd59f78086729cc86d8a34a5787cae7136274a0e8df3ccd4\": rpc error: code = NotFound desc = could not find container \"d88d1d6bd8a47c4ebd59f78086729cc86d8a34a5787cae7136274a0e8df3ccd4\": container with ID starting with d88d1d6bd8a47c4ebd59f78086729cc86d8a34a5787cae7136274a0e8df3ccd4 not found: ID does not exist"
Dec 06 08:33:04 addons-618522 kubelet[1500]: I1206 08:33:04.068667 1500 scope.go:117] "RemoveContainer" containerID="ca7aafe138914350d006b3f471220a8628f9bcb9a052f4c553e386742eb22fdb"
Dec 06 08:33:04 addons-618522 kubelet[1500]: I1206 08:33:04.069929 1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca7aafe138914350d006b3f471220a8628f9bcb9a052f4c553e386742eb22fdb"} err="failed to get container status \"ca7aafe138914350d006b3f471220a8628f9bcb9a052f4c553e386742eb22fdb\": rpc error: code = NotFound desc = could not find container \"ca7aafe138914350d006b3f471220a8628f9bcb9a052f4c553e386742eb22fdb\": container with ID starting with ca7aafe138914350d006b3f471220a8628f9bcb9a052f4c553e386742eb22fdb not found: ID does not exist"
Dec 06 08:33:04 addons-618522 kubelet[1500]: I1206 08:33:04.069947 1500 scope.go:117] "RemoveContainer" containerID="f6053f43b5805d6e017644cd8d7e735cd9c42b7d69a9539e222dd389a1b68563"
Dec 06 08:33:04 addons-618522 kubelet[1500]: I1206 08:33:04.189338 1500 scope.go:117] "RemoveContainer" containerID="f6053f43b5805d6e017644cd8d7e735cd9c42b7d69a9539e222dd389a1b68563"
Dec 06 08:33:04 addons-618522 kubelet[1500]: E1206 08:33:04.190097 1500 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6053f43b5805d6e017644cd8d7e735cd9c42b7d69a9539e222dd389a1b68563\": container with ID starting with f6053f43b5805d6e017644cd8d7e735cd9c42b7d69a9539e222dd389a1b68563 not found: ID does not exist" containerID="f6053f43b5805d6e017644cd8d7e735cd9c42b7d69a9539e222dd389a1b68563"
Dec 06 08:33:04 addons-618522 kubelet[1500]: I1206 08:33:04.190147 1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6053f43b5805d6e017644cd8d7e735cd9c42b7d69a9539e222dd389a1b68563"} err="failed to get container status \"f6053f43b5805d6e017644cd8d7e735cd9c42b7d69a9539e222dd389a1b68563\": rpc error: code = NotFound desc = could not find container \"f6053f43b5805d6e017644cd8d7e735cd9c42b7d69a9539e222dd389a1b68563\": container with ID starting with f6053f43b5805d6e017644cd8d7e735cd9c42b7d69a9539e222dd389a1b68563 not found: ID does not exist"
Dec 06 08:33:13 addons-618522 kubelet[1500]: E1206 08:33:13.772311 1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765009993771732090 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 06 08:33:13 addons-618522 kubelet[1500]: E1206 08:33:13.772424 1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765009993771732090 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 06 08:33:23 addons-618522 kubelet[1500]: E1206 08:33:23.776610 1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010003776084977 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 06 08:33:23 addons-618522 kubelet[1500]: E1206 08:33:23.777011 1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010003776084977 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 06 08:33:26 addons-618522 kubelet[1500]: I1206 08:33:26.390663 1500 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-2k5hq" secret="" err="secret \"gcp-auth\" not found"
Dec 06 08:33:33 addons-618522 kubelet[1500]: E1206 08:33:33.780044 1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010013779460154 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 06 08:33:33 addons-618522 kubelet[1500]: E1206 08:33:33.780093 1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010013779460154 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 06 08:33:43 addons-618522 kubelet[1500]: E1206 08:33:43.783551 1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010023783114510 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 06 08:33:43 addons-618522 kubelet[1500]: E1206 08:33:43.783579 1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010023783114510 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 06 08:33:53 addons-618522 kubelet[1500]: E1206 08:33:53.788002 1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010033787409988 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 06 08:33:53 addons-618522 kubelet[1500]: E1206 08:33:53.788052 1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010033787409988 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 06 08:34:03 addons-618522 kubelet[1500]: E1206 08:34:03.792924 1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010043791491294 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 06 08:34:03 addons-618522 kubelet[1500]: E1206 08:34:03.792975 1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010043791491294 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 06 08:34:13 addons-618522 kubelet[1500]: E1206 08:34:13.795758 1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010053795279611 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 06 08:34:13 addons-618522 kubelet[1500]: E1206 08:34:13.796262 1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010053795279611 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 06 08:34:16 addons-618522 kubelet[1500]: I1206 08:34:16.390965 1500 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Dec 06 08:34:18 addons-618522 kubelet[1500]: I1206 08:34:18.421109 1500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmk66\" (UniqueName: \"kubernetes.io/projected/ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6-kube-api-access-bmk66\") pod \"hello-world-app-5d498dc89-q49v8\" (UID: \"ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6\") " pod="default/hello-world-app-5d498dc89-q49v8"
==> storage-provisioner [5196916f6baf47b445b94972b7e511739075008df75b03baf2c42ddc38d8b404] <==
W1206 08:33:55.990027 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:33:57.994688 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:33:58.000702 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:00.003640 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:00.011749 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:02.015504 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:02.023560 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:04.028448 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:04.035652 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:06.039451 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:06.048741 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:08.052108 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:08.060531 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:10.064029 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:10.069881 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:12.074420 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:12.083709 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:14.087593 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:14.095594 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:16.099566 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:16.106775 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:18.111649 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:18.117834 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:20.122653 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 08:34:20.131771 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-618522 -n addons-618522
helpers_test.go:269: (dbg) Run: kubectl --context addons-618522 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-q49v8 ingress-nginx-admission-create-4lxk7 ingress-nginx-admission-patch-z9k7w
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context addons-618522 describe pod hello-world-app-5d498dc89-q49v8 ingress-nginx-admission-create-4lxk7 ingress-nginx-admission-patch-z9k7w
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-618522 describe pod hello-world-app-5d498dc89-q49v8 ingress-nginx-admission-create-4lxk7 ingress-nginx-admission-patch-z9k7w: exit status 1 (87.148474ms)
-- stdout --
Name: hello-world-app-5d498dc89-q49v8
Namespace: default
Priority: 0
Service Account: default
Node: addons-618522/192.168.39.168
Start Time: Sat, 06 Dec 2025 08:34:18 +0000
Labels: app=hello-world-app
pod-template-hash=5d498dc89
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hello-world-app-5d498dc89
Containers:
hello-world-app:
Container ID:
Image: docker.io/kicbase/echo-server:1.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bmk66 (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-bmk66:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2s default-scheduler Successfully assigned default/hello-world-app-5d498dc89-q49v8 to addons-618522
Normal Pulling 1s kubelet Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-4lxk7" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-z9k7w" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-618522 describe pod hello-world-app-5d498dc89-q49v8 ingress-nginx-admission-create-4lxk7 ingress-nginx-admission-patch-z9k7w: exit status 1
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-618522 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-618522 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-618522 addons disable ingress --alsologtostderr -v=1: (7.827608362s)
--- FAIL: TestAddons/parallel/Ingress (155.82s)