=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run: kubectl --context addons-301052 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run: kubectl --context addons-301052 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run: kubectl --context addons-301052 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [ef7f12e8-972f-418c-8608-d62b63b98950] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [ef7f12e8-972f-418c-8608-d62b63b98950] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004053876s
I1208 03:42:56.034351 129900 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run: out/minikube-linux-amd64 -p addons-301052 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-301052 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.588071322s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run: kubectl --context addons-301052 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run: out/minikube-linux-amd64 -p addons-301052 ip
addons_test.go:299: (dbg) Run: nslookup hello-john.test 192.168.39.103
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-301052 -n addons-301052
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p addons-301052 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-301052 logs -n 25: (1.054055813s)
helpers_test.go:260: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-232951 │ download-only-232951 │ jenkins │ v1.37.0 │ 08 Dec 25 03:39 UTC │ 08 Dec 25 03:39 UTC │
│ start │ --download-only -p binary-mirror-485333 --alsologtostderr --binary-mirror http://127.0.0.1:38203 --driver=kvm2 --container-runtime=crio │ binary-mirror-485333 │ jenkins │ v1.37.0 │ 08 Dec 25 03:39 UTC │ │
│ delete │ -p binary-mirror-485333 │ binary-mirror-485333 │ jenkins │ v1.37.0 │ 08 Dec 25 03:39 UTC │ 08 Dec 25 03:39 UTC │
│ addons │ disable dashboard -p addons-301052 │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:39 UTC │ │
│ addons │ enable dashboard -p addons-301052 │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:39 UTC │ │
│ start │ -p addons-301052 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:39 UTC │ 08 Dec 25 03:42 UTC │
│ addons │ addons-301052 addons disable volcano --alsologtostderr -v=1 │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:42 UTC │ 08 Dec 25 03:42 UTC │
│ addons │ addons-301052 addons disable gcp-auth --alsologtostderr -v=1 │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:42 UTC │ 08 Dec 25 03:42 UTC │
│ addons │ enable headlamp -p addons-301052 --alsologtostderr -v=1 │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:42 UTC │ 08 Dec 25 03:42 UTC │
│ addons │ addons-301052 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:42 UTC │ 08 Dec 25 03:42 UTC │
│ addons │ addons-301052 addons disable metrics-server --alsologtostderr -v=1 │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:42 UTC │ 08 Dec 25 03:42 UTC │
│ ssh │ addons-301052 ssh cat /opt/local-path-provisioner/pvc-7dfb495a-6399-4db8-a94c-9302cbd53b7e_default_test-pvc/file1 │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:42 UTC │ 08 Dec 25 03:42 UTC │
│ addons │ addons-301052 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:42 UTC │ 08 Dec 25 03:43 UTC │
│ addons │ addons-301052 addons disable headlamp --alsologtostderr -v=1 │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:42 UTC │ 08 Dec 25 03:42 UTC │
│ ip │ addons-301052 ip │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:42 UTC │ 08 Dec 25 03:42 UTC │
│ addons │ addons-301052 addons disable registry --alsologtostderr -v=1 │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:42 UTC │ 08 Dec 25 03:42 UTC │
│ addons │ addons-301052 addons disable yakd --alsologtostderr -v=1 │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:42 UTC │ 08 Dec 25 03:42 UTC │
│ ssh │ addons-301052 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:42 UTC │ │
│ addons │ addons-301052 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:42 UTC │ 08 Dec 25 03:43 UTC │
│ addons │ addons-301052 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:43 UTC │ 08 Dec 25 03:43 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-301052 │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:43 UTC │ 08 Dec 25 03:43 UTC │
│ addons │ addons-301052 addons disable registry-creds --alsologtostderr -v=1 │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:43 UTC │ 08 Dec 25 03:43 UTC │
│ addons │ addons-301052 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:43 UTC │ 08 Dec 25 03:43 UTC │
│ addons │ addons-301052 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:43 UTC │ 08 Dec 25 03:43 UTC │
│ ip │ addons-301052 ip │ addons-301052 │ jenkins │ v1.37.0 │ 08 Dec 25 03:45 UTC │ 08 Dec 25 03:45 UTC │
└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/08 03:39:52
Running on machine: ubuntu-20-agent-10
Binary: Built with gc go1.25.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1208 03:39:52.062784 130870 out.go:360] Setting OutFile to fd 1 ...
I1208 03:39:52.063091 130870 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:39:52.063102 130870 out.go:374] Setting ErrFile to fd 2...
I1208 03:39:52.063108 130870 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:39:52.063330 130870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
I1208 03:39:52.063877 130870 out.go:368] Setting JSON to false
I1208 03:39:52.064772 130870 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1336,"bootTime":1765163856,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1208 03:39:52.064834 130870 start.go:143] virtualization: kvm guest
I1208 03:39:52.066681 130870 out.go:179] * [addons-301052] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1208 03:39:52.067918 130870 out.go:179] - MINIKUBE_LOCATION=21409
I1208 03:39:52.067958 130870 notify.go:221] Checking for updates...
I1208 03:39:52.070056 130870 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1208 03:39:52.071068 130870 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
I1208 03:39:52.072058 130870 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
I1208 03:39:52.073109 130870 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1208 03:39:52.074138 130870 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1208 03:39:52.075357 130870 driver.go:422] Setting default libvirt URI to qemu:///system
I1208 03:39:52.107727 130870 out.go:179] * Using the kvm2 driver based on user configuration
I1208 03:39:52.108828 130870 start.go:309] selected driver: kvm2
I1208 03:39:52.108843 130870 start.go:927] validating driver "kvm2" against <nil>
I1208 03:39:52.108855 130870 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1208 03:39:52.109633 130870 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1208 03:39:52.109875 130870 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1208 03:39:52.109919 130870 cni.go:84] Creating CNI manager for ""
I1208 03:39:52.109982 130870 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1208 03:39:52.109994 130870 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1208 03:39:52.110058 130870 start.go:353] cluster config:
{Name:addons-301052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-301052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1208 03:39:52.110186 130870 iso.go:125] acquiring lock: {Name:mkd550ce23b107beb8be7edee8182e09aac2818e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1208 03:39:52.111618 130870 out.go:179] * Starting "addons-301052" primary control-plane node in "addons-301052" cluster
I1208 03:39:52.112643 130870 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1208 03:39:52.112677 130870 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21409-125868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
I1208 03:39:52.112702 130870 cache.go:65] Caching tarball of preloaded images
I1208 03:39:52.112819 130870 preload.go:238] Found /home/jenkins/minikube-integration/21409-125868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1208 03:39:52.112833 130870 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
I1208 03:39:52.113259 130870 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/config.json ...
I1208 03:39:52.113290 130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/config.json: {Name:mk0a5f52b95fc620886c94a38f9e732f44198aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 03:39:52.113507 130870 start.go:360] acquireMachinesLock for addons-301052: {Name:mka95432fbbe0b4b61b444ff6ef3750992988c0d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1208 03:39:52.113582 130870 start.go:364] duration metric: took 55.424µs to acquireMachinesLock for "addons-301052"
I1208 03:39:52.113608 130870 start.go:93] Provisioning new machine with config: &{Name:addons-301052 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-301052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1208 03:39:52.113683 130870 start.go:125] createHost starting for "" (driver="kvm2")
I1208 03:39:52.115030 130870 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1208 03:39:52.115184 130870 start.go:159] libmachine.API.Create for "addons-301052" (driver="kvm2")
I1208 03:39:52.115224 130870 client.go:173] LocalClient.Create starting
I1208 03:39:52.115356 130870 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca.pem
I1208 03:39:52.190449 130870 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/cert.pem
I1208 03:39:52.285135 130870 main.go:143] libmachine: creating domain...
I1208 03:39:52.285160 130870 main.go:143] libmachine: creating network...
I1208 03:39:52.286570 130870 main.go:143] libmachine: found existing default network
I1208 03:39:52.286788 130870 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1208 03:39:52.287305 130870 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d3a740}
I1208 03:39:52.287421 130870 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-301052</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1208 03:39:52.293294 130870 main.go:143] libmachine: creating private network mk-addons-301052 192.168.39.0/24...
I1208 03:39:52.362989 130870 main.go:143] libmachine: private network mk-addons-301052 192.168.39.0/24 created
I1208 03:39:52.363307 130870 main.go:143] libmachine: <network>
<name>mk-addons-301052</name>
<uuid>5a4d4462-57b6-4f17-b60d-4951aaa68ccb</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:da:82:a0'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1208 03:39:52.363353 130870 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052 ...
I1208 03:39:52.363377 130870 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21409-125868/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
I1208 03:39:52.363389 130870 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21409-125868/.minikube
I1208 03:39:52.363492 130870 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21409-125868/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21409-125868/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso...
I1208 03:39:52.659378 130870 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa...
I1208 03:39:52.730466 130870 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/addons-301052.rawdisk...
I1208 03:39:52.730515 130870 main.go:143] libmachine: Writing magic tar header
I1208 03:39:52.730542 130870 main.go:143] libmachine: Writing SSH key tar header
I1208 03:39:52.730625 130870 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052 ...
I1208 03:39:52.730683 130870 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052
I1208 03:39:52.730706 130870 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052 (perms=drwx------)
I1208 03:39:52.730718 130870 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21409-125868/.minikube/machines
I1208 03:39:52.730727 130870 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21409-125868/.minikube/machines (perms=drwxr-xr-x)
I1208 03:39:52.730739 130870 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21409-125868/.minikube
I1208 03:39:52.730748 130870 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21409-125868/.minikube (perms=drwxr-xr-x)
I1208 03:39:52.730758 130870 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21409-125868
I1208 03:39:52.730767 130870 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21409-125868 (perms=drwxrwxr-x)
I1208 03:39:52.730779 130870 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1208 03:39:52.730789 130870 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1208 03:39:52.730798 130870 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1208 03:39:52.730808 130870 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1208 03:39:52.730817 130870 main.go:143] libmachine: checking permissions on dir: /home
I1208 03:39:52.730826 130870 main.go:143] libmachine: skipping /home - not owner
I1208 03:39:52.730831 130870 main.go:143] libmachine: defining domain...
I1208 03:39:52.732142 130870 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-301052</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/addons-301052.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-301052'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1208 03:39:52.739873 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:cd:f2:86 in network default
I1208 03:39:52.740548 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:39:52.740568 130870 main.go:143] libmachine: starting domain...
I1208 03:39:52.740573 130870 main.go:143] libmachine: ensuring networks are active...
I1208 03:39:52.741372 130870 main.go:143] libmachine: Ensuring network default is active
I1208 03:39:52.741792 130870 main.go:143] libmachine: Ensuring network mk-addons-301052 is active
I1208 03:39:52.742438 130870 main.go:143] libmachine: getting domain XML...
I1208 03:39:52.743573 130870 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-301052</name>
<uuid>e8d346d2-27a3-494e-bffe-43f0ee3efd1d</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/addons-301052.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:58:bd:9c'/>
<source network='mk-addons-301052'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:cd:f2:86'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1208 03:39:54.062727 130870 main.go:143] libmachine: waiting for domain to start...
I1208 03:39:54.064042 130870 main.go:143] libmachine: domain is now running
I1208 03:39:54.064061 130870 main.go:143] libmachine: waiting for IP...
I1208 03:39:54.064865 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:39:54.065288 130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
I1208 03:39:54.065305 130870 main.go:143] libmachine: trying to list again with source=arp
I1208 03:39:54.065578 130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
I1208 03:39:54.065640 130870 retry.go:31] will retry after 235.409964ms: waiting for domain to come up
I1208 03:39:54.303125 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:39:54.303631 130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
I1208 03:39:54.303653 130870 main.go:143] libmachine: trying to list again with source=arp
I1208 03:39:54.303918 130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
I1208 03:39:54.303972 130870 retry.go:31] will retry after 342.161147ms: waiting for domain to come up
I1208 03:39:54.647715 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:39:54.648373 130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
I1208 03:39:54.648398 130870 main.go:143] libmachine: trying to list again with source=arp
I1208 03:39:54.648725 130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
I1208 03:39:54.648774 130870 retry.go:31] will retry after 327.760524ms: waiting for domain to come up
I1208 03:39:54.978285 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:39:54.978804 130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
I1208 03:39:54.978819 130870 main.go:143] libmachine: trying to list again with source=arp
I1208 03:39:54.979103 130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
I1208 03:39:54.979147 130870 retry.go:31] will retry after 370.383597ms: waiting for domain to come up
I1208 03:39:55.350752 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:39:55.351279 130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
I1208 03:39:55.351297 130870 main.go:143] libmachine: trying to list again with source=arp
I1208 03:39:55.351669 130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
I1208 03:39:55.351714 130870 retry.go:31] will retry after 716.591556ms: waiting for domain to come up
I1208 03:39:56.069747 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:39:56.070319 130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
I1208 03:39:56.070336 130870 main.go:143] libmachine: trying to list again with source=arp
I1208 03:39:56.070628 130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
I1208 03:39:56.070667 130870 retry.go:31] will retry after 595.081797ms: waiting for domain to come up
I1208 03:39:56.667379 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:39:56.667927 130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
I1208 03:39:56.667961 130870 main.go:143] libmachine: trying to list again with source=arp
I1208 03:39:56.668217 130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
I1208 03:39:56.668257 130870 retry.go:31] will retry after 782.672431ms: waiting for domain to come up
I1208 03:39:57.452489 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:39:57.453015 130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
I1208 03:39:57.453034 130870 main.go:143] libmachine: trying to list again with source=arp
I1208 03:39:57.453333 130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
I1208 03:39:57.453377 130870 retry.go:31] will retry after 1.054589976s: waiting for domain to come up
I1208 03:39:58.509708 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:39:58.510329 130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
I1208 03:39:58.510348 130870 main.go:143] libmachine: trying to list again with source=arp
I1208 03:39:58.510642 130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
I1208 03:39:58.510681 130870 retry.go:31] will retry after 1.806097252s: waiting for domain to come up
I1208 03:40:00.319679 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:00.320204 130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
I1208 03:40:00.320223 130870 main.go:143] libmachine: trying to list again with source=arp
I1208 03:40:00.320504 130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
I1208 03:40:00.320549 130870 retry.go:31] will retry after 1.994021743s: waiting for domain to come up
I1208 03:40:02.316362 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:02.316943 130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
I1208 03:40:02.316970 130870 main.go:143] libmachine: trying to list again with source=arp
I1208 03:40:02.317320 130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
I1208 03:40:02.317373 130870 retry.go:31] will retry after 1.993048808s: waiting for domain to come up
I1208 03:40:04.311748 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:04.312345 130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
I1208 03:40:04.312370 130870 main.go:143] libmachine: trying to list again with source=arp
I1208 03:40:04.312641 130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
I1208 03:40:04.312677 130870 retry.go:31] will retry after 3.244643549s: waiting for domain to come up
I1208 03:40:07.559217 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:07.559745 130870 main.go:143] libmachine: domain addons-301052 has current primary IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:07.559759 130870 main.go:143] libmachine: found domain IP: 192.168.39.103
I1208 03:40:07.559781 130870 main.go:143] libmachine: reserving static IP address...
I1208 03:40:07.560170 130870 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-301052", mac: "52:54:00:58:bd:9c", ip: "192.168.39.103"} in network mk-addons-301052
I1208 03:40:07.754557 130870 main.go:143] libmachine: reserved static IP address 192.168.39.103 for domain addons-301052
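The run of "will retry after …: waiting for domain to come up" lines above is minikube's retry helper polling the libvirt lease with a growing, jittered delay until the domain reports an IP. A minimal sketch of that poll-with-backoff shape in plain shell (the `have_ip` stub is a stand-in for the real lease lookup, and the attempt counts are illustrative):

```shell
# Poll a condition with growing backoff, as retry.go does above.
# have_ip is a stub: here it "finds" an IP on the 4th attempt.
attempts=0
have_ip() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 4 ]
}

delay_ms=100
tries=0
while ! have_ip; do
  tries=$((tries + 1))
  [ "$tries" -gt 10 ] && { echo "timed out waiting for domain IP" >&2; exit 1; }
  echo "will retry after ${delay_ms}ms: waiting for domain to come up"
  sleep "$(awk "BEGIN { printf \"%.3f\", $delay_ms / 1000 }")"
  delay_ms=$((delay_ms * 2))   # grow the backoff each round
done
echo "found domain IP after $attempts checks"
```

The real helper also randomizes each delay (hence 327ms, 370ms, 716ms… above) rather than doubling exactly.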
I1208 03:40:07.754586 130870 main.go:143] libmachine: waiting for SSH...
I1208 03:40:07.754606 130870 main.go:143] libmachine: Getting to WaitForSSH function...
I1208 03:40:07.757706 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:07.758196 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:minikube Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:07.758233 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:07.758466 130870 main.go:143] libmachine: Using SSH client type: native
I1208 03:40:07.758834 130870 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.103 22 <nil> <nil>}
I1208 03:40:07.758852 130870 main.go:143] libmachine: About to run SSH command:
exit 0
I1208 03:40:07.871547 130870 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1208 03:40:07.872006 130870 main.go:143] libmachine: domain creation complete
I1208 03:40:07.873598 130870 machine.go:94] provisionDockerMachine start ...
I1208 03:40:07.875843 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:07.876263 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:07.876288 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:07.876458 130870 main.go:143] libmachine: Using SSH client type: native
I1208 03:40:07.876654 130870 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.103 22 <nil> <nil>}
I1208 03:40:07.876664 130870 main.go:143] libmachine: About to run SSH command:
hostname
I1208 03:40:07.985495 130870 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1208 03:40:07.985536 130870 buildroot.go:166] provisioning hostname "addons-301052"
I1208 03:40:07.988547 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:07.988947 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:07.988970 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:07.989150 130870 main.go:143] libmachine: Using SSH client type: native
I1208 03:40:07.989360 130870 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.103 22 <nil> <nil>}
I1208 03:40:07.989371 130870 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-301052 && echo "addons-301052" | sudo tee /etc/hostname
I1208 03:40:08.116391 130870 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-301052
I1208 03:40:08.119377 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.119802 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:08.119839 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.120010 130870 main.go:143] libmachine: Using SSH client type: native
I1208 03:40:08.120209 130870 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.103 22 <nil> <nil>}
I1208 03:40:08.120230 130870 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-301052' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-301052/g' /etc/hosts;
else
echo '127.0.1.1 addons-301052' | sudo tee -a /etc/hosts;
fi
fi
I1208 03:40:08.240094 130870 main.go:143] libmachine: SSH cmd err, output: <nil>:
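The SSH command above makes sure `/etc/hosts` inside the guest carries a `127.0.1.1` entry for the new hostname. The same rewrite can be exercised against a temporary copy, with no sudo (the starting file contents here are invented for illustration):

```shell
# Same /etc/hosts rewrite as the SSH command above, on a local copy.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$hosts"

name=addons-301052   # the hostname being provisioned
if ! grep -q "\s${name}$" "$hosts"; then            # no line ends with the name yet
  if grep -q '^127\.0\.1\.1\s' "$hosts"; then       # a 127.0.1.1 entry exists...
    sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${name}/" "$hosts"   # ...replace it
  else
    echo "127.0.1.1 ${name}" >> "$hosts"            # ...otherwise append one
  fi
fi
grep '^127\.0\.1\.1' "$hosts"
```

Since the sample file already has a `127.0.1.1 minikube` line, the `sed` branch runs and the entry becomes `127.0.1.1 addons-301052`.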
I1208 03:40:08.240143 130870 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-125868/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-125868/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-125868/.minikube}
I1208 03:40:08.240177 130870 buildroot.go:174] setting up certificates
I1208 03:40:08.240191 130870 provision.go:84] configureAuth start
I1208 03:40:08.243207 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.243574 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:08.243593 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.245767 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.246130 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:08.246150 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.246286 130870 provision.go:143] copyHostCerts
I1208 03:40:08.246366 130870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-125868/.minikube/cert.pem (1123 bytes)
I1208 03:40:08.246507 130870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-125868/.minikube/key.pem (1675 bytes)
I1208 03:40:08.246584 130870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-125868/.minikube/ca.pem (1078 bytes)
I1208 03:40:08.246648 130870 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-125868/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca-key.pem org=jenkins.addons-301052 san=[127.0.0.1 192.168.39.103 addons-301052 localhost minikube]
I1208 03:40:08.275465 130870 provision.go:177] copyRemoteCerts
I1208 03:40:08.275525 130870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1208 03:40:08.277996 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.278358 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:08.278379 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.278510 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:08.365089 130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1208 03:40:08.416344 130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1208 03:40:08.446154 130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1208 03:40:08.476033 130870 provision.go:87] duration metric: took 235.824192ms to configureAuth
I1208 03:40:08.476077 130870 buildroot.go:189] setting minikube options for container-runtime
I1208 03:40:08.476284 130870 config.go:182] Loaded profile config "addons-301052": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 03:40:08.479019 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.479528 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:08.479559 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.479781 130870 main.go:143] libmachine: Using SSH client type: native
I1208 03:40:08.480090 130870 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.103 22 <nil> <nil>}
I1208 03:40:08.480117 130870 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1208 03:40:08.727822 130870 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1208 03:40:08.727846 130870 machine.go:97] duration metric: took 854.230626ms to provisionDockerMachine
I1208 03:40:08.727858 130870 client.go:176] duration metric: took 16.612624215s to LocalClient.Create
I1208 03:40:08.727875 130870 start.go:167] duration metric: took 16.612692117s to libmachine.API.Create "addons-301052"
I1208 03:40:08.727883 130870 start.go:293] postStartSetup for "addons-301052" (driver="kvm2")
I1208 03:40:08.727892 130870 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1208 03:40:08.727995 130870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1208 03:40:08.731128 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.731543 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:08.731566 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.731728 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:08.817591 130870 ssh_runner.go:195] Run: cat /etc/os-release
I1208 03:40:08.822566 130870 info.go:137] Remote host: Buildroot 2025.02
I1208 03:40:08.822612 130870 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-125868/.minikube/addons for local assets ...
I1208 03:40:08.822730 130870 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-125868/.minikube/files for local assets ...
I1208 03:40:08.822769 130870 start.go:296] duration metric: took 94.879541ms for postStartSetup
I1208 03:40:08.825830 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.826277 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:08.826321 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.826561 130870 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/config.json ...
I1208 03:40:08.826781 130870 start.go:128] duration metric: took 16.713086736s to createHost
I1208 03:40:08.828818 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.829177 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:08.829202 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.829394 130870 main.go:143] libmachine: Using SSH client type: native
I1208 03:40:08.829602 130870 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.103 22 <nil> <nil>}
I1208 03:40:08.829611 130870 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1208 03:40:08.940402 130870 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765165208.895450329
I1208 03:40:08.940451 130870 fix.go:216] guest clock: 1765165208.895450329
I1208 03:40:08.940464 130870 fix.go:229] Guest: 2025-12-08 03:40:08.895450329 +0000 UTC Remote: 2025-12-08 03:40:08.826795401 +0000 UTC m=+16.814407780 (delta=68.654928ms)
I1208 03:40:08.940503 130870 fix.go:200] guest clock delta is within tolerance: 68.654928ms
I1208 03:40:08.940511 130870 start.go:83] releasing machines lock for "addons-301052", held for 16.826915901s
I1208 03:40:08.943284 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.943694 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:08.943719 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.944188 130870 ssh_runner.go:195] Run: cat /version.json
I1208 03:40:08.944254 130870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1208 03:40:08.946920 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.947186 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.947260 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:08.947290 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.947433 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:08.947602 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:08.947633 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:08.947788 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:09.054357 130870 ssh_runner.go:195] Run: systemctl --version
I1208 03:40:09.060440 130870 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1208 03:40:09.217852 130870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1208 03:40:09.224236 130870 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1208 03:40:09.224329 130870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1208 03:40:09.243867 130870 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1208 03:40:09.243892 130870 start.go:496] detecting cgroup driver to use...
I1208 03:40:09.243976 130870 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1208 03:40:09.262612 130870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1208 03:40:09.279740 130870 docker.go:218] disabling cri-docker service (if available) ...
I1208 03:40:09.279811 130870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1208 03:40:09.297398 130870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1208 03:40:09.314260 130870 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1208 03:40:09.463148 130870 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1208 03:40:09.667739 130870 docker.go:234] disabling docker service ...
I1208 03:40:09.667825 130870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1208 03:40:09.683863 130870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1208 03:40:09.699137 130870 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1208 03:40:09.863751 130870 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1208 03:40:10.003669 130870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1208 03:40:10.019046 130870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1208 03:40:10.041047 130870 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1208 03:40:10.041112 130870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1208 03:40:10.053319 130870 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1208 03:40:10.053394 130870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1208 03:40:10.065972 130870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1208 03:40:10.078708 130870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1208 03:40:10.091330 130870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1208 03:40:10.104664 130870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1208 03:40:10.117520 130870 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1208 03:40:10.138486 130870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
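The sequence of `sed` runs above rewrites `/etc/crio/crio.conf.d/02-crio.conf` in place: new pause image, `cgroupfs` cgroup manager, `conmon_cgroup = "pod"`, and a `default_sysctls` block opening port 0 upward to unprivileged users. The same edits applied to a local copy (the starting file contents are a plausible stand-in, not the VM's actual config):

```shell
# The 02-crio.conf rewrites from the log, applied to a local copy.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"                       # drop old value...
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf" # ...re-add after cgroup_manager
grep -q '^ *default_sysctls' "$conf" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = [\n]' "$conf"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
cat "$conf"
```

Note the drop-then-re-add dance for `conmon_cgroup`: deleting first makes the edit idempotent regardless of the file's starting state.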
I1208 03:40:10.150598 130870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1208 03:40:10.160961 130870 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1208 03:40:10.161020 130870 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1208 03:40:10.181340 130870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1208 03:40:10.193118 130870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1208 03:40:10.333205 130870 ssh_runner.go:195] Run: sudo systemctl restart crio
I1208 03:40:10.447949 130870 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1208 03:40:10.448058 130870 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1208 03:40:10.453639 130870 start.go:564] Will wait 60s for crictl version
I1208 03:40:10.453738 130870 ssh_runner.go:195] Run: which crictl
I1208 03:40:10.457693 130870 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1208 03:40:10.492113 130870 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1208 03:40:10.492247 130870 ssh_runner.go:195] Run: crio --version
I1208 03:40:10.521693 130870 ssh_runner.go:195] Run: crio --version
I1208 03:40:10.554101 130870 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
I1208 03:40:10.558138 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:10.558578 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:10.558605 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:10.558841 130870 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1208 03:40:10.563488 130870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1208 03:40:10.578123 130870 kubeadm.go:884] updating cluster {Name:addons-301052 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-301052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1208 03:40:10.578266 130870 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1208 03:40:10.578314 130870 ssh_runner.go:195] Run: sudo crictl images --output json
I1208 03:40:10.607363 130870 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
I1208 03:40:10.607445 130870 ssh_runner.go:195] Run: which lz4
I1208 03:40:10.611490 130870 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1208 03:40:10.616298 130870 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1208 03:40:10.616340 130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
I1208 03:40:11.813534 130870 crio.go:462] duration metric: took 1.2020774s to copy over tarball
I1208 03:40:11.813639 130870 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1208 03:40:13.246359 130870 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.43268676s)
I1208 03:40:13.246394 130870 crio.go:469] duration metric: took 1.432824376s to extract the tarball
I1208 03:40:13.246402 130870 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1208 03:40:13.283222 130870 ssh_runner.go:195] Run: sudo crictl images --output json
I1208 03:40:13.326298 130870 crio.go:514] all images are preloaded for cri-o runtime.
I1208 03:40:13.326333 130870 cache_images.go:86] Images are preloaded, skipping loading
I1208 03:40:13.326344 130870 kubeadm.go:935] updating node { 192.168.39.103 8443 v1.34.2 crio true true} ...
I1208 03:40:13.326476 130870 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-301052 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
[Install]
config:
{KubernetesVersion:v1.34.2 ClusterName:addons-301052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1208 03:40:13.326548 130870 ssh_runner.go:195] Run: crio config
I1208 03:40:13.373243 130870 cni.go:84] Creating CNI manager for ""
I1208 03:40:13.373279 130870 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1208 03:40:13.373300 130870 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1208 03:40:13.373324 130870 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.103 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-301052 NodeName:addons-301052 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1208 03:40:13.373448 130870 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.103
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "addons-301052"
  kubeletExtraArgs:
  - name: "node-ip"
    value: "192.168.39.103"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.103"]
  extraArgs:
  - name: "enable-admission-plugins"
    value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
  - name: "allocate-node-cidrs"
    value: "true"
  - name: "leader-elect"
    value: "false"
scheduler:
  extraArgs:
  - name: "leader-elect"
    value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1208 03:40:13.373536 130870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
I1208 03:40:13.385689 130870 binaries.go:51] Found k8s binaries, skipping transfer
I1208 03:40:13.385776 130870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1208 03:40:13.397925 130870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I1208 03:40:13.418234 130870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1208 03:40:13.439356 130870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
I1208 03:40:13.460334 130870 ssh_runner.go:195] Run: grep 192.168.39.103 control-plane.minikube.internal$ /etc/hosts
I1208 03:40:13.464846 130870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.103 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1208 03:40:13.479657 130870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1208 03:40:13.617685 130870 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1208 03:40:13.636568 130870 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052 for IP: 192.168.39.103
I1208 03:40:13.636599 130870 certs.go:195] generating shared ca certs ...
I1208 03:40:13.636616 130870 certs.go:227] acquiring lock for ca certs: {Name:mkde290f016452b47757f4047e34e65b6d895da1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 03:40:13.636761 130870 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-125868/.minikube/ca.key
I1208 03:40:13.702170 130870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-125868/.minikube/ca.crt ...
I1208 03:40:13.702198 130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/ca.crt: {Name:mke87be34c5c596f3cd382ba989ad1fa916992a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 03:40:13.702380 130870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-125868/.minikube/ca.key ...
I1208 03:40:13.702391 130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/ca.key: {Name:mkb2ba9e512a7a853703c882645570892099bd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 03:40:13.702487 130870 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-125868/.minikube/proxy-client-ca.key
I1208 03:40:13.788118 130870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-125868/.minikube/proxy-client-ca.crt ...
I1208 03:40:13.788156 130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/proxy-client-ca.crt: {Name:mk5f661ce8f8fdbed090c902672a423b18fef9cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 03:40:13.788345 130870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-125868/.minikube/proxy-client-ca.key ...
I1208 03:40:13.788357 130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/proxy-client-ca.key: {Name:mk950c67bafa3f05c0edc38ab8b6f5935245787f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 03:40:13.788428 130870 certs.go:257] generating profile certs ...
I1208 03:40:13.788494 130870 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.key
I1208 03:40:13.788508 130870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt with IP's: []
I1208 03:40:13.890840 130870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt ...
I1208 03:40:13.890870 130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: {Name:mk314789026b1cc69b0fe3b0cb95d601a54847f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 03:40:13.891049 130870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.key ...
I1208 03:40:13.891061 130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.key: {Name:mk548c638f4510ca3c75d31fcb5f5d337a799c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 03:40:13.891132 130870 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.key.5f15a724
I1208 03:40:13.891152 130870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.crt.5f15a724 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.103]
I1208 03:40:13.921322 130870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.crt.5f15a724 ...
I1208 03:40:13.921353 130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.crt.5f15a724: {Name:mk3ca22ef41f82bdb96104cf5305fd506689b74e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 03:40:13.922061 130870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.key.5f15a724 ...
I1208 03:40:13.922082 130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.key.5f15a724: {Name:mk3f059544f35a29b9c00dbddf8421936a1654af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 03:40:13.922639 130870 certs.go:382] copying /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.crt.5f15a724 -> /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.crt
I1208 03:40:13.922723 130870 certs.go:386] copying /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.key.5f15a724 -> /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.key
I1208 03:40:13.922775 130870 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/proxy-client.key
I1208 03:40:13.922795 130870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/proxy-client.crt with IP's: []
I1208 03:40:14.062021 130870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/proxy-client.crt ...
I1208 03:40:14.062055 130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/proxy-client.crt: {Name:mk5cb75985139d01d8a0bdf7fa4fb3424ce2f6b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 03:40:14.062233 130870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/proxy-client.key ...
I1208 03:40:14.062247 130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/proxy-client.key: {Name:mk3edfbda303f1b4afd4cf4b34ecda448800bb94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 03:40:14.062415 130870 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca-key.pem (1675 bytes)
I1208 03:40:14.062457 130870 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca.pem (1078 bytes)
I1208 03:40:14.062519 130870 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/cert.pem (1123 bytes)
I1208 03:40:14.062552 130870 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/key.pem (1675 bytes)
I1208 03:40:14.063273 130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1208 03:40:14.094648 130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1208 03:40:14.129045 130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1208 03:40:14.161155 130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1208 03:40:14.192130 130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1208 03:40:14.224112 130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1208 03:40:14.254590 130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1208 03:40:14.285165 130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1208 03:40:14.321442 130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1208 03:40:14.360440 130870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1208 03:40:14.389467 130870 ssh_runner.go:195] Run: openssl version
I1208 03:40:14.396097 130870 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1208 03:40:14.407873 130870 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1208 03:40:14.419479 130870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1208 03:40:14.424673 130870 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 8 03:40 /usr/share/ca-certificates/minikubeCA.pem
I1208 03:40:14.424739 130870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1208 03:40:14.432443 130870 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1208 03:40:14.444761 130870 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1208 03:40:14.456820 130870 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1208 03:40:14.461883 130870 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1208 03:40:14.461984 130870 kubeadm.go:401] StartCluster: {Name:addons-301052 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-301052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1208 03:40:14.462075 130870 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1208 03:40:14.462135 130870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1208 03:40:14.496924 130870 cri.go:89] found id: ""
I1208 03:40:14.497016 130870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1208 03:40:14.509502 130870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1208 03:40:14.521606 130870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1208 03:40:14.533479 130870 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1208 03:40:14.533500 130870 kubeadm.go:158] found existing configuration files:
I1208 03:40:14.533548 130870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1208 03:40:14.544943 130870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1208 03:40:14.545005 130870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1208 03:40:14.556609 130870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1208 03:40:14.567391 130870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1208 03:40:14.567450 130870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1208 03:40:14.579641 130870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1208 03:40:14.590979 130870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1208 03:40:14.591042 130870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1208 03:40:14.603082 130870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1208 03:40:14.614391 130870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1208 03:40:14.614453 130870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1208 03:40:14.626517 130870 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1208 03:40:14.675560 130870 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
I1208 03:40:14.675629 130870 kubeadm.go:319] [preflight] Running pre-flight checks
I1208 03:40:14.768775 130870 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1208 03:40:14.768951 130870 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1208 03:40:14.769092 130870 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1208 03:40:14.780110 130870 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1208 03:40:14.784887 130870 out.go:252] - Generating certificates and keys ...
I1208 03:40:14.785075 130870 kubeadm.go:319] [certs] Using existing ca certificate authority
I1208 03:40:14.785167 130870 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1208 03:40:15.041717 130870 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1208 03:40:15.194374 130870 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1208 03:40:15.337015 130870 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1208 03:40:16.120015 130870 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1208 03:40:16.201047 130870 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1208 03:40:16.201430 130870 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-301052 localhost] and IPs [192.168.39.103 127.0.0.1 ::1]
I1208 03:40:16.312733 130870 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1208 03:40:16.312888 130870 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-301052 localhost] and IPs [192.168.39.103 127.0.0.1 ::1]
I1208 03:40:16.385567 130870 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1208 03:40:16.668853 130870 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1208 03:40:16.696797 130870 kubeadm.go:319] [certs] Generating "sa" key and public key
I1208 03:40:16.696919 130870 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1208 03:40:16.867880 130870 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1208 03:40:16.937367 130870 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1208 03:40:17.463543 130870 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1208 03:40:17.711004 130870 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1208 03:40:17.793853 130870 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1208 03:40:17.795737 130870 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1208 03:40:17.798386 130870 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1208 03:40:17.799961 130870 out.go:252] - Booting up control plane ...
I1208 03:40:17.800075 130870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1208 03:40:17.800206 130870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1208 03:40:17.801028 130870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1208 03:40:17.825288 130870 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1208 03:40:17.825451 130870 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1208 03:40:17.832098 130870 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1208 03:40:17.832303 130870 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1208 03:40:17.832486 130870 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1208 03:40:18.001833 130870 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1208 03:40:18.002037 130870 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1208 03:40:19.504388 130870 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50496703s
I1208 03:40:19.510832 130870 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1208 03:40:19.511039 130870 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.103:8443/livez
I1208 03:40:19.511156 130870 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1208 03:40:19.511307 130870 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1208 03:40:21.957202 130870 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.448458106s
I1208 03:40:23.441841 130870 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.934618079s
I1208 03:40:26.508943 130870 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.003890572s
I1208 03:40:26.530740 130870 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1208 03:40:26.547786 130870 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1208 03:40:26.563795 130870 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1208 03:40:26.563991 130870 kubeadm.go:319] [mark-control-plane] Marking the node addons-301052 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1208 03:40:26.578781 130870 kubeadm.go:319] [bootstrap-token] Using token: 8vbi5u.3kekmhk202vogjki
I1208 03:40:26.579989 130870 out.go:252] - Configuring RBAC rules ...
I1208 03:40:26.580100 130870 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1208 03:40:26.587504 130870 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1208 03:40:26.597022 130870 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1208 03:40:26.601083 130870 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1208 03:40:26.607677 130870 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1208 03:40:26.614614 130870 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1208 03:40:26.918004 130870 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1208 03:40:27.375055 130870 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1208 03:40:27.916885 130870 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1208 03:40:27.917938 130870 kubeadm.go:319]
I1208 03:40:27.918004 130870 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1208 03:40:27.918011 130870 kubeadm.go:319]
I1208 03:40:27.918086 130870 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1208 03:40:27.918096 130870 kubeadm.go:319]
I1208 03:40:27.918130 130870 kubeadm.go:319] mkdir -p $HOME/.kube
I1208 03:40:27.918245 130870 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1208 03:40:27.918306 130870 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1208 03:40:27.918313 130870 kubeadm.go:319]
I1208 03:40:27.918359 130870 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1208 03:40:27.918365 130870 kubeadm.go:319]
I1208 03:40:27.918412 130870 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1208 03:40:27.918419 130870 kubeadm.go:319]
I1208 03:40:27.918482 130870 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1208 03:40:27.918595 130870 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1208 03:40:27.918694 130870 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1208 03:40:27.918709 130870 kubeadm.go:319]
I1208 03:40:27.918829 130870 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1208 03:40:27.918956 130870 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1208 03:40:27.918968 130870 kubeadm.go:319]
I1208 03:40:27.919077 130870 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8vbi5u.3kekmhk202vogjki \
I1208 03:40:27.919230 130870 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:0cd0b3eff0e159b3979f70cb18b8d13b2d72ebd098bd90cdc70e035975d60cfd \
I1208 03:40:27.919256 130870 kubeadm.go:319] --control-plane
I1208 03:40:27.919264 130870 kubeadm.go:319]
I1208 03:40:27.919370 130870 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1208 03:40:27.919385 130870 kubeadm.go:319]
I1208 03:40:27.919506 130870 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8vbi5u.3kekmhk202vogjki \
I1208 03:40:27.919638 130870 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:0cd0b3eff0e159b3979f70cb18b8d13b2d72ebd098bd90cdc70e035975d60cfd
I1208 03:40:27.921388 130870 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1208 03:40:27.921428 130870 cni.go:84] Creating CNI manager for ""
I1208 03:40:27.921437 130870 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1208 03:40:27.923121 130870 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1208 03:40:27.924486 130870 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1208 03:40:27.937429 130870 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1208 03:40:27.959832 130870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1208 03:40:27.959963 130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1208 03:40:27.959965 130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-301052 minikube.k8s.io/updated_at=2025_12_08T03_40_27_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=730a0938e5fe3e95dced085e5e597b4345feecad minikube.k8s.io/name=addons-301052 minikube.k8s.io/primary=true
I1208 03:40:28.107276 130870 ops.go:34] apiserver oom_adj: -16
I1208 03:40:28.107331 130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1208 03:40:28.607657 130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1208 03:40:29.108281 130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1208 03:40:29.607404 130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1208 03:40:30.107683 130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1208 03:40:30.607712 130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1208 03:40:31.108352 130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1208 03:40:31.608200 130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1208 03:40:32.108330 130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1208 03:40:32.203135 130870 kubeadm.go:1114] duration metric: took 4.243272452s to wait for elevateKubeSystemPrivileges
I1208 03:40:32.203186 130870 kubeadm.go:403] duration metric: took 17.741209566s to StartCluster
I1208 03:40:32.203214 130870 settings.go:142] acquiring lock: {Name:mk8cd1e38ee853efa0b11d6abb3aeb99916975f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 03:40:32.203995 130870 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21409-125868/kubeconfig
I1208 03:40:32.204439 130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/kubeconfig: {Name:mk83f735c71f0681683d120e6684a264c50ab0a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 03:40:32.205164 130870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1208 03:40:32.205189 130870 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1208 03:40:32.205276 130870 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1208 03:40:32.205398 130870 config.go:182] Loaded profile config "addons-301052": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 03:40:32.205420 130870 addons.go:70] Setting registry-creds=true in profile "addons-301052"
I1208 03:40:32.205424 130870 addons.go:70] Setting registry=true in profile "addons-301052"
I1208 03:40:32.205424 130870 addons.go:70] Setting gcp-auth=true in profile "addons-301052"
I1208 03:40:32.205406 130870 addons.go:70] Setting yakd=true in profile "addons-301052"
I1208 03:40:32.205454 130870 addons.go:239] Setting addon registry=true in "addons-301052"
I1208 03:40:32.205464 130870 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-301052"
I1208 03:40:32.205476 130870 addons.go:70] Setting volcano=true in profile "addons-301052"
I1208 03:40:32.205468 130870 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-301052"
I1208 03:40:32.205489 130870 addons.go:239] Setting addon volcano=true in "addons-301052"
I1208 03:40:32.205489 130870 addons.go:70] Setting default-storageclass=true in profile "addons-301052"
I1208 03:40:32.205497 130870 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-301052"
I1208 03:40:32.205501 130870 addons.go:70] Setting ingress=true in profile "addons-301052"
I1208 03:40:32.205506 130870 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-301052"
I1208 03:40:32.205519 130870 addons.go:70] Setting ingress-dns=true in profile "addons-301052"
I1208 03:40:32.205529 130870 host.go:66] Checking if "addons-301052" exists ...
I1208 03:40:32.205457 130870 mustload.go:66] Loading cluster: addons-301052
I1208 03:40:32.205544 130870 addons.go:70] Setting storage-provisioner=true in profile "addons-301052"
I1208 03:40:32.205565 130870 addons.go:239] Setting addon storage-provisioner=true in "addons-301052"
I1208 03:40:32.205581 130870 addons.go:70] Setting cloud-spanner=true in profile "addons-301052"
I1208 03:40:32.205481 130870 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-301052"
I1208 03:40:32.205608 130870 addons.go:239] Setting addon cloud-spanner=true in "addons-301052"
I1208 03:40:32.205619 130870 addons.go:70] Setting metrics-server=true in profile "addons-301052"
I1208 03:40:32.205630 130870 addons.go:239] Setting addon metrics-server=true in "addons-301052"
I1208 03:40:32.205650 130870 host.go:66] Checking if "addons-301052" exists ...
I1208 03:40:32.205661 130870 addons.go:70] Setting volumesnapshots=true in profile "addons-301052"
I1208 03:40:32.205674 130870 addons.go:239] Setting addon volumesnapshots=true in "addons-301052"
I1208 03:40:32.205696 130870 host.go:66] Checking if "addons-301052" exists ...
I1208 03:40:32.205764 130870 config.go:182] Loaded profile config "addons-301052": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 03:40:32.206048 130870 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-301052"
I1208 03:40:32.206073 130870 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-301052"
I1208 03:40:32.206102 130870 host.go:66] Checking if "addons-301052" exists ...
I1208 03:40:32.205466 130870 addons.go:239] Setting addon yakd=true in "addons-301052"
I1208 03:40:32.206303 130870 host.go:66] Checking if "addons-301052" exists ...
I1208 03:40:32.205527 130870 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-301052"
I1208 03:40:32.206848 130870 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-301052"
I1208 03:40:32.205535 130870 host.go:66] Checking if "addons-301052" exists ...
I1208 03:40:32.205466 130870 addons.go:239] Setting addon registry-creds=true in "addons-301052"
I1208 03:40:32.206920 130870 host.go:66] Checking if "addons-301052" exists ...
I1208 03:40:32.205512 130870 addons.go:239] Setting addon ingress=true in "addons-301052"
I1208 03:40:32.207064 130870 host.go:66] Checking if "addons-301052" exists ...
I1208 03:40:32.207203 130870 out.go:179] * Verifying Kubernetes components...
I1208 03:40:32.205650 130870 host.go:66] Checking if "addons-301052" exists ...
I1208 03:40:32.205613 130870 host.go:66] Checking if "addons-301052" exists ...
I1208 03:40:32.205491 130870 addons.go:70] Setting inspektor-gadget=true in profile "addons-301052"
I1208 03:40:32.205535 130870 addons.go:239] Setting addon ingress-dns=true in "addons-301052"
I1208 03:40:32.206877 130870 host.go:66] Checking if "addons-301052" exists ...
I1208 03:40:32.205594 130870 host.go:66] Checking if "addons-301052" exists ...
I1208 03:40:32.208047 130870 addons.go:239] Setting addon inspektor-gadget=true in "addons-301052"
I1208 03:40:32.208106 130870 host.go:66] Checking if "addons-301052" exists ...
I1208 03:40:32.208325 130870 host.go:66] Checking if "addons-301052" exists ...
I1208 03:40:32.209819 130870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1208 03:40:32.211780 130870 host.go:66] Checking if "addons-301052" exists ...
I1208 03:40:32.214258 130870 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-301052"
I1208 03:40:32.214329 130870 host.go:66] Checking if "addons-301052" exists ...
I1208 03:40:32.214359 130870 addons.go:239] Setting addon default-storageclass=true in "addons-301052"
I1208 03:40:32.214400 130870 host.go:66] Checking if "addons-301052" exists ...
I1208 03:40:32.214616 130870 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1208 03:40:32.214654 130870 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
W1208 03:40:32.214724 130870 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1208 03:40:32.215767 130870 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1208 03:40:32.216373 130870 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1208 03:40:32.216379 130870 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1208 03:40:32.216415 130870 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1208 03:40:32.216733 130870 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1208 03:40:32.216428 130870 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1208 03:40:32.216453 130870 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1208 03:40:32.217305 130870 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1208 03:40:32.217980 130870 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
I1208 03:40:32.218076 130870 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1208 03:40:32.218595 130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1208 03:40:32.218693 130870 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1208 03:40:32.218722 130870 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1208 03:40:32.218725 130870 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1208 03:40:32.219099 130870 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1208 03:40:32.218728 130870 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1208 03:40:32.218731 130870 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1208 03:40:32.219319 130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1208 03:40:32.219329 130870 out.go:179] - Using image docker.io/registry:3.0.0
I1208 03:40:32.219340 130870 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
I1208 03:40:32.219341 130870 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1208 03:40:32.219482 130870 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
I1208 03:40:32.220018 130870 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1208 03:40:32.220623 130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1208 03:40:32.220650 130870 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1208 03:40:32.220665 130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1208 03:40:32.220234 130870 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1208 03:40:32.221055 130870 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1208 03:40:32.220670 130870 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1208 03:40:32.221188 130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1208 03:40:32.221279 130870 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1208 03:40:32.221328 130870 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1208 03:40:32.221570 130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1208 03:40:32.221333 130870 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1208 03:40:32.221666 130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1208 03:40:32.222002 130870 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1208 03:40:32.222004 130870 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1208 03:40:32.222065 130870 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1208 03:40:32.222346 130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1208 03:40:32.224204 130870 out.go:179] - Using image docker.io/busybox:stable
I1208 03:40:32.224214 130870 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1208 03:40:32.224206 130870 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1208 03:40:32.225357 130870 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1208 03:40:32.225371 130870 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1208 03:40:32.225399 130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1208 03:40:32.225423 130870 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1208 03:40:32.225440 130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1208 03:40:32.227514 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.227911 130870 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1208 03:40:32.228813 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.229950 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:32.229996 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.230710 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.230724 130870 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1208 03:40:32.231008 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:32.231056 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:32.231106 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.231824 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:32.231922 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.232374 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.232831 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:32.232875 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.233273 130870 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1208 03:40:32.233689 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:32.233807 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:32.233842 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.234087 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.234285 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:32.234335 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.234401 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.234580 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:32.235195 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:32.235610 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.235661 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.235711 130870 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1208 03:40:32.235758 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:32.235785 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.236320 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:32.236530 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.236616 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.236661 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:32.236712 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.236826 130870 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1208 03:40:32.236859 130870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1208 03:40:32.236891 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:32.236945 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.237074 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.237127 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:32.237157 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.237283 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:32.237591 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:32.237867 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:32.237917 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:32.237953 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.237969 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:32.237998 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.238282 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:32.238412 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.238796 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:32.238832 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.238823 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:32.239076 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:32.239131 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.239678 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:32.239710 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.239752 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:32.239795 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.239916 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:32.240187 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:32.241936 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.242386 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:32.242424 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:32.242644 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
W1208 03:40:32.438639 130870 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:49342->192.168.39.103:22: read: connection reset by peer
I1208 03:40:32.438688 130870 retry.go:31] will retry after 312.584824ms: ssh: handshake failed: read tcp 192.168.39.1:49342->192.168.39.103:22: read: connection reset by peer
W1208 03:40:32.451493 130870 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:49364->192.168.39.103:22: read: connection reset by peer
I1208 03:40:32.451536 130870 retry.go:31] will retry after 275.869476ms: ssh: handshake failed: read tcp 192.168.39.1:49364->192.168.39.103:22: read: connection reset by peer
I1208 03:40:32.743413 130870 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1208 03:40:32.743486 130870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1208 03:40:32.903654 130870 node_ready.go:35] waiting up to 6m0s for node "addons-301052" to be "Ready" ...
I1208 03:40:32.910215 130870 node_ready.go:49] node "addons-301052" is "Ready"
I1208 03:40:32.910251 130870 node_ready.go:38] duration metric: took 6.557861ms for node "addons-301052" to be "Ready" ...
I1208 03:40:32.910268 130870 api_server.go:52] waiting for apiserver process to appear ...
I1208 03:40:32.910329 130870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1208 03:40:32.931744 130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1208 03:40:32.936423 130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1208 03:40:32.969859 130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1208 03:40:32.982756 130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1208 03:40:32.998149 130870 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1208 03:40:32.998178 130870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1208 03:40:33.000573 130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1208 03:40:33.006955 130870 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1208 03:40:33.006997 130870 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1208 03:40:33.014001 130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1208 03:40:33.029790 130870 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1208 03:40:33.029834 130870 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1208 03:40:33.051167 130870 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1208 03:40:33.051198 130870 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1208 03:40:33.068357 130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1208 03:40:33.082300 130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1208 03:40:33.179255 130870 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1208 03:40:33.179282 130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1208 03:40:33.275808 130870 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1208 03:40:33.275838 130870 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1208 03:40:33.281466 130870 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1208 03:40:33.281490 130870 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1208 03:40:33.285887 130870 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1208 03:40:33.285922 130870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1208 03:40:33.296230 130870 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1208 03:40:33.296254 130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1208 03:40:33.307094 130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1208 03:40:33.319025 130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1208 03:40:33.514770 130870 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1208 03:40:33.514804 130870 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1208 03:40:33.564857 130870 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1208 03:40:33.564886 130870 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1208 03:40:33.573720 130870 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1208 03:40:33.573754 130870 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1208 03:40:33.579531 130870 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1208 03:40:33.579563 130870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1208 03:40:33.584438 130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1208 03:40:33.918356 130870 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1208 03:40:33.918415 130870 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1208 03:40:33.987365 130870 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1208 03:40:33.987393 130870 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1208 03:40:34.025815 130870 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1208 03:40:34.025844 130870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1208 03:40:34.032583 130870 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1208 03:40:34.032610 130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1208 03:40:34.474803 130870 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1208 03:40:34.474838 130870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1208 03:40:34.506692 130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1208 03:40:34.605794 130870 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1208 03:40:34.605819 130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1208 03:40:34.620876 130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1208 03:40:35.104082 130870 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1208 03:40:35.104109 130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1208 03:40:35.229940 130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1208 03:40:35.574659 130870 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.664292162s)
I1208 03:40:35.574711 130870 api_server.go:72] duration metric: took 3.369490663s to wait for apiserver process to appear ...
I1208 03:40:35.574718 130870 api_server.go:88] waiting for apiserver healthz status ...
I1208 03:40:35.574758 130870 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
I1208 03:40:35.575015 130870 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.831498967s)
I1208 03:40:35.575047 130870 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1208 03:40:35.611851 130870 api_server.go:279] https://192.168.39.103:8443/healthz returned 200:
ok
I1208 03:40:35.621404 130870 api_server.go:141] control plane version: v1.34.2
I1208 03:40:35.621437 130870 api_server.go:131] duration metric: took 46.710506ms to wait for apiserver health ...
I1208 03:40:35.621447 130870 system_pods.go:43] waiting for kube-system pods to appear ...
I1208 03:40:35.717605 130870 system_pods.go:59] 10 kube-system pods found
I1208 03:40:35.717648 130870 system_pods.go:61] "amd-gpu-device-plugin-mn6gz" [34c7c111-d878-4bea-8f1c-64b08778e73b] Pending
I1208 03:40:35.717658 130870 system_pods.go:61] "coredns-66bc5c9577-wx9fk" [dac1139c-e1e9-46d1-9ba7-0d171fde95a2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1208 03:40:35.717665 130870 system_pods.go:61] "coredns-66bc5c9577-z7cr6" [f39adb02-b124-455f-b9aa-bb4f34c022f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1208 03:40:35.717671 130870 system_pods.go:61] "etcd-addons-301052" [f848e815-7dfb-410e-9326-db452be103d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1208 03:40:35.717676 130870 system_pods.go:61] "kube-apiserver-addons-301052" [73a6eccb-55a9-46a4-a3e7-b9f83fe33aad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1208 03:40:35.717680 130870 system_pods.go:61] "kube-controller-manager-addons-301052" [deae2e02-cb37-4628-aa9d-6ee9a4756a1b] Running
I1208 03:40:35.717688 130870 system_pods.go:61] "kube-proxy-7c4kr" [7d58c183-e8c6-40c7-8fb9-3cc7bbc35eef] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1208 03:40:35.717698 130870 system_pods.go:61] "kube-scheduler-addons-301052" [bf8d885f-4cfd-4977-92b0-5afc4838c1fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1208 03:40:35.717704 130870 system_pods.go:61] "nvidia-device-plugin-daemonset-fvs49" [c8a1320a-c4e2-4f2a-be37-56390e503e79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1208 03:40:35.717712 130870 system_pods.go:61] "registry-creds-764b6fb674-f84c9" [c16c8605-5ed0-4a5b-9291-123778fc160f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1208 03:40:35.717721 130870 system_pods.go:74] duration metric: took 96.267166ms to wait for pod list to return data ...
I1208 03:40:35.717733 130870 default_sa.go:34] waiting for default service account to be created ...
I1208 03:40:35.751354 130870 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1208 03:40:35.751395 130870 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1208 03:40:35.888471 130870 default_sa.go:45] found service account: "default"
I1208 03:40:35.888515 130870 default_sa.go:55] duration metric: took 170.774225ms for default service account to be created ...
I1208 03:40:35.888533 130870 system_pods.go:116] waiting for k8s-apps to be running ...
I1208 03:40:36.011976 130870 system_pods.go:86] 10 kube-system pods found
I1208 03:40:36.012015 130870 system_pods.go:89] "amd-gpu-device-plugin-mn6gz" [34c7c111-d878-4bea-8f1c-64b08778e73b] Pending
I1208 03:40:36.012026 130870 system_pods.go:89] "coredns-66bc5c9577-wx9fk" [dac1139c-e1e9-46d1-9ba7-0d171fde95a2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1208 03:40:36.012037 130870 system_pods.go:89] "coredns-66bc5c9577-z7cr6" [f39adb02-b124-455f-b9aa-bb4f34c022f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1208 03:40:36.012048 130870 system_pods.go:89] "etcd-addons-301052" [f848e815-7dfb-410e-9326-db452be103d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1208 03:40:36.012064 130870 system_pods.go:89] "kube-apiserver-addons-301052" [73a6eccb-55a9-46a4-a3e7-b9f83fe33aad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1208 03:40:36.012070 130870 system_pods.go:89] "kube-controller-manager-addons-301052" [deae2e02-cb37-4628-aa9d-6ee9a4756a1b] Running
I1208 03:40:36.012081 130870 system_pods.go:89] "kube-proxy-7c4kr" [7d58c183-e8c6-40c7-8fb9-3cc7bbc35eef] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1208 03:40:36.012088 130870 system_pods.go:89] "kube-scheduler-addons-301052" [bf8d885f-4cfd-4977-92b0-5afc4838c1fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1208 03:40:36.012101 130870 system_pods.go:89] "nvidia-device-plugin-daemonset-fvs49" [c8a1320a-c4e2-4f2a-be37-56390e503e79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1208 03:40:36.012113 130870 system_pods.go:89] "registry-creds-764b6fb674-f84c9" [c16c8605-5ed0-4a5b-9291-123778fc160f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1208 03:40:36.012133 130870 retry.go:31] will retry after 252.20739ms: missing components: kube-proxy
I1208 03:40:36.144047 130870 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-301052" context rescaled to 1 replicas
I1208 03:40:36.269064 130870 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1208 03:40:36.269099 130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1208 03:40:36.310617 130870 system_pods.go:86] 10 kube-system pods found
I1208 03:40:36.310654 130870 system_pods.go:89] "amd-gpu-device-plugin-mn6gz" [34c7c111-d878-4bea-8f1c-64b08778e73b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1208 03:40:36.310662 130870 system_pods.go:89] "coredns-66bc5c9577-wx9fk" [dac1139c-e1e9-46d1-9ba7-0d171fde95a2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1208 03:40:36.310670 130870 system_pods.go:89] "coredns-66bc5c9577-z7cr6" [f39adb02-b124-455f-b9aa-bb4f34c022f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1208 03:40:36.310675 130870 system_pods.go:89] "etcd-addons-301052" [f848e815-7dfb-410e-9326-db452be103d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1208 03:40:36.310682 130870 system_pods.go:89] "kube-apiserver-addons-301052" [73a6eccb-55a9-46a4-a3e7-b9f83fe33aad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1208 03:40:36.310692 130870 system_pods.go:89] "kube-controller-manager-addons-301052" [deae2e02-cb37-4628-aa9d-6ee9a4756a1b] Running
I1208 03:40:36.310700 130870 system_pods.go:89] "kube-proxy-7c4kr" [7d58c183-e8c6-40c7-8fb9-3cc7bbc35eef] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1208 03:40:36.310708 130870 system_pods.go:89] "kube-scheduler-addons-301052" [bf8d885f-4cfd-4977-92b0-5afc4838c1fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1208 03:40:36.310717 130870 system_pods.go:89] "nvidia-device-plugin-daemonset-fvs49" [c8a1320a-c4e2-4f2a-be37-56390e503e79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1208 03:40:36.310738 130870 system_pods.go:89] "registry-creds-764b6fb674-f84c9" [c16c8605-5ed0-4a5b-9291-123778fc160f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1208 03:40:36.310759 130870 retry.go:31] will retry after 364.474332ms: missing components: kube-proxy
I1208 03:40:36.689374 130870 system_pods.go:86] 10 kube-system pods found
I1208 03:40:36.689408 130870 system_pods.go:89] "amd-gpu-device-plugin-mn6gz" [34c7c111-d878-4bea-8f1c-64b08778e73b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1208 03:40:36.689416 130870 system_pods.go:89] "coredns-66bc5c9577-wx9fk" [dac1139c-e1e9-46d1-9ba7-0d171fde95a2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1208 03:40:36.689425 130870 system_pods.go:89] "coredns-66bc5c9577-z7cr6" [f39adb02-b124-455f-b9aa-bb4f34c022f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1208 03:40:36.689429 130870 system_pods.go:89] "etcd-addons-301052" [f848e815-7dfb-410e-9326-db452be103d9] Running
I1208 03:40:36.689436 130870 system_pods.go:89] "kube-apiserver-addons-301052" [73a6eccb-55a9-46a4-a3e7-b9f83fe33aad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1208 03:40:36.689442 130870 system_pods.go:89] "kube-controller-manager-addons-301052" [deae2e02-cb37-4628-aa9d-6ee9a4756a1b] Running
I1208 03:40:36.689447 130870 system_pods.go:89] "kube-proxy-7c4kr" [7d58c183-e8c6-40c7-8fb9-3cc7bbc35eef] Running
I1208 03:40:36.689454 130870 system_pods.go:89] "kube-scheduler-addons-301052" [bf8d885f-4cfd-4977-92b0-5afc4838c1fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1208 03:40:36.689468 130870 system_pods.go:89] "nvidia-device-plugin-daemonset-fvs49" [c8a1320a-c4e2-4f2a-be37-56390e503e79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1208 03:40:36.689484 130870 system_pods.go:89] "registry-creds-764b6fb674-f84c9" [c16c8605-5ed0-4a5b-9291-123778fc160f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1208 03:40:36.689496 130870 system_pods.go:126] duration metric: took 800.953885ms to wait for k8s-apps to be running ...
I1208 03:40:36.689506 130870 system_svc.go:44] waiting for kubelet service to be running ....
I1208 03:40:36.689582 130870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1208 03:40:36.848488 130870 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1208 03:40:36.848525 130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1208 03:40:37.396372 130870 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1208 03:40:37.396405 130870 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1208 03:40:37.649540 130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1208 03:40:38.642841 130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.706379695s)
I1208 03:40:38.645312 130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.713524243s)
I1208 03:40:38.935296 130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.965393864s)
I1208 03:40:38.935377 130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.952586125s)
I1208 03:40:39.331359 130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.330751583s)
I1208 03:40:39.331497 130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (6.317452994s)
I1208 03:40:39.331546 130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.26315561s)
I1208 03:40:39.331577 130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.249249786s)
I1208 03:40:39.669370 130870 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1208 03:40:39.672433 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:39.672990 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:39.673025 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:39.673242 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:40.193825 130870 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1208 03:40:40.447167 130870 addons.go:239] Setting addon gcp-auth=true in "addons-301052"
I1208 03:40:40.447248 130870 host.go:66] Checking if "addons-301052" exists ...
I1208 03:40:40.449464 130870 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1208 03:40:40.452175 130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:40.452689 130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
I1208 03:40:40.452727 130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
I1208 03:40:40.452912 130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
I1208 03:40:41.774577 130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.467431785s)
I1208 03:40:41.774627 130870 addons.go:495] Verifying addon ingress=true in "addons-301052"
I1208 03:40:41.774669 130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.455603533s)
I1208 03:40:41.774717 130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.190248379s)
I1208 03:40:41.774828 130870 addons.go:495] Verifying addon registry=true in "addons-301052"
I1208 03:40:41.774748 130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.268024323s)
I1208 03:40:41.774806 130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.153903517s)
I1208 03:40:41.775449 130870 addons.go:495] Verifying addon metrics-server=true in "addons-301052"
I1208 03:40:41.776235 130870 out.go:179] * Verifying ingress addon...
I1208 03:40:41.776892 130870 out.go:179] * Verifying registry addon...
I1208 03:40:41.776892 130870 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-301052 service yakd-dashboard -n yakd-dashboard
I1208 03:40:41.778717 130870 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1208 03:40:41.779589 130870 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1208 03:40:41.813408 130870 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1208 03:40:41.813438 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:41.813659 130870 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1208 03:40:41.813677 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:41.984750 130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.754757799s)
I1208 03:40:41.984808 130870 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.295203418s)
W1208 03:40:41.984813 130870 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1208 03:40:41.984826 130870 system_svc.go:56] duration metric: took 5.295316828s WaitForService to wait for kubelet
I1208 03:40:41.984836 130870 retry.go:31] will retry after 209.91093ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1208 03:40:41.984836 130870 kubeadm.go:587] duration metric: took 9.779616044s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1208 03:40:41.984864 130870 node_conditions.go:102] verifying NodePressure condition ...
I1208 03:40:42.029586 130870 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1208 03:40:42.029622 130870 node_conditions.go:123] node cpu capacity is 2
I1208 03:40:42.029643 130870 node_conditions.go:105] duration metric: took 44.773768ms to run NodePressure ...
I1208 03:40:42.029657 130870 start.go:242] waiting for startup goroutines ...
I1208 03:40:42.195363 130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1208 03:40:42.289313 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:42.289418 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:42.787777 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:42.790607 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:43.205301 130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.55570028s)
I1208 03:40:43.205333 130870 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.755837836s)
I1208 03:40:43.205362 130870 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-301052"
I1208 03:40:43.206809 130870 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1208 03:40:43.206852 130870 out.go:179] * Verifying csi-hostpath-driver addon...
I1208 03:40:43.207891 130870 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1208 03:40:43.208801 130870 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1208 03:40:43.208935 130870 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1208 03:40:43.208962 130870 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1208 03:40:43.228431 130870 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1208 03:40:43.228460 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:43.299687 130870 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1208 03:40:43.299713 130870 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1208 03:40:43.308943 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:43.309511 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:43.409291 130870 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1208 03:40:43.409318 130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1208 03:40:43.506570 130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1208 03:40:43.712539 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:43.786535 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:43.786536 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:43.910370 130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.714959512s)
I1208 03:40:44.215449 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:44.284890 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:44.285469 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:44.661533 130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.154924671s)
I1208 03:40:44.662696 130870 addons.go:495] Verifying addon gcp-auth=true in "addons-301052"
I1208 03:40:44.664177 130870 out.go:179] * Verifying gcp-auth addon...
I1208 03:40:44.666590 130870 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1208 03:40:44.704578 130870 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1208 03:40:44.704616 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:44.747289 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:44.796969 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:44.796968 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:45.172448 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:45.214376 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:45.290230 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:45.290539 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:45.673420 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:45.713951 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:45.782340 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:45.788443 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:46.172110 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:46.212540 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:46.287677 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:46.290360 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:46.674477 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:46.714112 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:46.791117 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:46.792022 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:47.172300 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:47.216055 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:47.282308 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:47.283884 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:47.670827 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:47.714437 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:47.785132 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:47.786021 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:48.177111 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:48.275725 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:48.283468 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:48.284139 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:48.671468 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:48.771873 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:48.783071 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:48.783174 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:49.175169 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:49.212720 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:49.282764 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:49.283026 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:49.670809 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:49.713684 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:49.783134 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:49.783802 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:50.170365 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:50.213247 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:50.284383 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:50.285989 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:50.671709 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:50.713715 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:50.784350 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:50.784612 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:51.171112 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:51.215001 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:51.287752 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:51.288144 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:51.672418 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:51.714819 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:51.784169 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:51.784354 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:52.171233 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:52.214157 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:52.284112 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:52.287151 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:52.671630 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:52.714329 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:52.783572 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:52.784299 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:53.170282 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:53.213872 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:53.283211 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:53.283324 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:53.671442 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:53.718230 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:53.783203 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:53.784084 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:54.171190 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:54.272667 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:54.283419 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:54.283546 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:54.670750 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:54.713670 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:54.782911 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:54.783283 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:55.170468 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:55.213743 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:55.286206 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:55.287363 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:55.670794 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:55.714921 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:55.782351 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:55.783650 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:56.171553 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:56.213699 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:56.283130 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:56.283167 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:56.672050 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:56.713259 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:56.793129 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:56.793300 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:57.170991 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:57.219503 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:57.284048 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:57.284057 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:57.670477 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:57.714183 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:57.782767 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:57.783582 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:58.170240 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:58.213223 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:58.283784 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:58.285026 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:58.672729 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:58.713028 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:58.785618 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:58.785778 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:59.170247 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:59.213044 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:59.283078 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:40:59.283521 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:59.671276 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:40:59.714399 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:40:59.783445 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:40:59.783603 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:00.170975 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:00.216096 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:00.286297 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:41:00.286401 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:00.670955 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:00.714214 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:00.784351 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:41:00.784484 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:01.170476 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:01.213111 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:01.283462 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:41:01.283813 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:01.670371 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:01.713422 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:01.783488 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:01.784140 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:41:02.169994 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:02.212410 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:02.283599 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:41:02.284673 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:02.670846 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:02.713412 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:02.782244 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:02.783728 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:41:03.170367 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:03.212832 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:03.285353 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:03.286433 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:41:03.670749 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:03.714933 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:03.781721 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:03.783818 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:41:04.170884 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:04.217184 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:04.282243 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:04.285697 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:41:04.670536 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:04.713543 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:04.784340 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:04.784434 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:41:05.172671 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:05.216061 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:05.282697 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:05.282734 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1208 03:41:05.671684 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:05.715015 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:05.783660 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:05.786510 130870 kapi.go:107] duration metric: took 24.006918917s to wait for kubernetes.io/minikube-addons=registry ...
I1208 03:41:06.172743 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:06.215571 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:06.284552 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:06.672849 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:06.714124 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:06.783415 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:07.170011 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:07.212225 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:07.282939 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:07.670796 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:07.713420 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:07.784562 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:08.170590 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:08.213257 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:08.282488 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:08.670992 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:08.713846 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:08.782167 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:09.171182 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:09.212608 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:09.287380 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:09.672734 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:09.713397 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:09.783642 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:10.173207 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:10.213489 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:10.282301 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:10.672660 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:10.718652 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:10.783579 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:11.174135 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:11.217309 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:11.285166 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:11.671515 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:11.714207 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:11.783167 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:12.173395 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:12.215163 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:12.285555 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:12.670527 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:12.713455 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:12.784770 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:13.170768 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:13.214094 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:13.282894 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:13.670840 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:13.714432 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:13.783389 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:14.172359 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:14.213508 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:14.284624 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:14.674834 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:14.717856 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:14.787806 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:15.171593 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:15.218160 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:15.283279 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:15.671342 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:15.715032 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:15.784552 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:16.292312 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:16.294048 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:16.294357 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:16.674831 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:16.714637 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:16.786378 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:17.170532 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:17.215226 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:17.316145 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:17.670891 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:17.716422 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:17.782830 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:18.170264 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:18.214503 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:18.283877 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:18.670619 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:18.713191 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:18.782643 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:19.169783 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:19.213459 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:19.282826 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:19.672341 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:19.715389 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:19.783267 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:20.171154 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:20.214968 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:20.283770 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:20.671007 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:20.712803 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:20.783887 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:21.170592 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:21.213469 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:21.283165 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:21.670886 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:21.712729 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:21.782747 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:22.170408 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:22.213031 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:22.282344 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:22.671408 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:22.719523 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:22.782187 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:23.173004 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:23.213508 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:23.285374 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:23.672065 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:23.715711 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:23.783810 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:24.176310 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:24.213007 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:24.284400 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:24.672425 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:24.717081 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:24.789385 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:25.170824 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:25.214286 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:25.283012 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:25.670799 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:25.714093 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:25.782871 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:26.171481 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:26.212794 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:26.283913 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:26.670932 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:26.712837 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:26.782983 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:27.170961 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:27.212696 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:27.283245 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:27.670877 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:27.711967 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:27.785415 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:28.172040 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:28.212553 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:28.283744 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:28.674291 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:28.958205 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:28.967183 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:29.170193 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:29.216551 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:29.285092 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:29.672185 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:29.712667 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:29.783193 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:30.170794 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:30.213295 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:30.283794 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:30.670758 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:30.714189 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:30.782702 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:31.170821 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:31.214432 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:31.282563 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:31.672400 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:31.716300 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:31.785586 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:32.172715 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:32.215254 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:32.282677 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:32.671811 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:32.713954 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:32.783631 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:33.169488 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:33.213577 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:33.285197 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:33.673009 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:33.716728 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:33.784694 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:34.171844 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:34.213758 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:34.282911 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:34.674046 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:34.772793 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:34.873305 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:35.185296 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:35.215303 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:35.297468 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:35.673102 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:35.713833 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:35.782942 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:36.172045 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:36.218964 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:36.283567 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:36.671740 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:36.713536 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:36.782468 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:37.219614 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:37.219620 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:37.320025 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:37.670513 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:37.713098 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:37.782283 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:38.174341 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:38.213062 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:38.284279 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:38.671186 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:38.713042 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:38.782278 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:39.171210 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:39.212974 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:39.285081 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:39.673011 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:39.716044 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:39.782432 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:40.170186 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:40.216584 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:40.284687 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:40.675920 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:40.714976 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:40.790509 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:41.179441 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:41.220174 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:41.285379 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:41.675636 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:41.713775 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:41.786187 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:42.174451 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:42.223168 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:42.283877 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:42.671809 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:42.712727 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:42.784361 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:43.170756 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:43.219832 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:43.370851 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:43.671960 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:43.712843 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:43.781724 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:44.170656 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:44.214883 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:44.284613 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:44.672436 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:44.713464 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:44.784576 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:45.173858 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:45.217323 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:45.285232 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:45.671710 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:45.713145 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:45.784839 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:46.171867 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:46.217511 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:46.283707 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:46.672796 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:46.714260 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:46.786739 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:47.173100 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:47.215181 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:47.283487 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:47.672302 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:47.713400 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:47.785002 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:48.173144 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:48.215423 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:48.283991 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:48.671957 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:48.713286 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:48.782554 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:49.171695 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:49.216874 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:49.290998 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:49.671152 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:49.713667 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:49.783853 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:50.172562 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:50.217069 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:50.286103 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:50.671851 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:50.718024 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:50.783054 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:51.171076 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:51.214125 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:51.282605 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:51.670953 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:51.712840 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:51.783279 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:52.169574 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:52.214104 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:52.283931 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:52.670798 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:52.715513 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:52.783680 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:53.171390 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:53.213786 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:53.284676 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:53.670609 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:53.715540 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:53.787485 130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1208 03:41:54.172231 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:54.216543 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:54.285699 130870 kapi.go:107] duration metric: took 1m12.506977536s to wait for app.kubernetes.io/name=ingress-nginx ...
I1208 03:41:54.671524 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:54.772084 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:55.171178 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:55.212978 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1208 03:41:55.670990 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:55.712994 130870 kapi.go:107] duration metric: took 1m12.504195406s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1208 03:41:56.170748 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:56.670153 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:57.171963 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:57.707832 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:58.173736 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:58.670810 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:59.171453 130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1208 03:41:59.670719 130870 kapi.go:107] duration metric: took 1m15.004127101s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1208 03:41:59.672410 130870 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-301052 cluster.
I1208 03:41:59.673805 130870 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1208 03:41:59.674990 130870 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I1208 03:41:59.676303 130870 out.go:179] * Enabled addons: ingress-dns, default-storageclass, cloud-spanner, storage-provisioner-rancher, storage-provisioner, registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
I1208 03:41:59.677362 130870 addons.go:530] duration metric: took 1m27.472103741s for enable addons: enabled=[ingress-dns default-storageclass cloud-spanner storage-provisioner-rancher storage-provisioner registry-creds nvidia-device-plugin amd-gpu-device-plugin inspektor-gadget metrics-server yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
I1208 03:41:59.677413 130870 start.go:247] waiting for cluster config update ...
I1208 03:41:59.677438 130870 start.go:256] writing updated cluster config ...
I1208 03:41:59.677749 130870 ssh_runner.go:195] Run: rm -f paused
I1208 03:41:59.684150 130870 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1208 03:41:59.771653 130870 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z7cr6" in "kube-system" namespace to be "Ready" or be gone ...
I1208 03:41:59.777518 130870 pod_ready.go:94] pod "coredns-66bc5c9577-z7cr6" is "Ready"
I1208 03:41:59.777556 130870 pod_ready.go:86] duration metric: took 5.859172ms for pod "coredns-66bc5c9577-z7cr6" in "kube-system" namespace to be "Ready" or be gone ...
I1208 03:41:59.779641 130870 pod_ready.go:83] waiting for pod "etcd-addons-301052" in "kube-system" namespace to be "Ready" or be gone ...
I1208 03:41:59.785023 130870 pod_ready.go:94] pod "etcd-addons-301052" is "Ready"
I1208 03:41:59.785052 130870 pod_ready.go:86] duration metric: took 5.385993ms for pod "etcd-addons-301052" in "kube-system" namespace to be "Ready" or be gone ...
I1208 03:41:59.787089 130870 pod_ready.go:83] waiting for pod "kube-apiserver-addons-301052" in "kube-system" namespace to be "Ready" or be gone ...
I1208 03:41:59.791726 130870 pod_ready.go:94] pod "kube-apiserver-addons-301052" is "Ready"
I1208 03:41:59.791747 130870 pod_ready.go:86] duration metric: took 4.633015ms for pod "kube-apiserver-addons-301052" in "kube-system" namespace to be "Ready" or be gone ...
I1208 03:41:59.793689 130870 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-301052" in "kube-system" namespace to be "Ready" or be gone ...
I1208 03:42:00.089020 130870 pod_ready.go:94] pod "kube-controller-manager-addons-301052" is "Ready"
I1208 03:42:00.089050 130870 pod_ready.go:86] duration metric: took 295.34037ms for pod "kube-controller-manager-addons-301052" in "kube-system" namespace to be "Ready" or be gone ...
I1208 03:42:00.289428 130870 pod_ready.go:83] waiting for pod "kube-proxy-7c4kr" in "kube-system" namespace to be "Ready" or be gone ...
I1208 03:42:00.688662 130870 pod_ready.go:94] pod "kube-proxy-7c4kr" is "Ready"
I1208 03:42:00.688741 130870 pod_ready.go:86] duration metric: took 399.27566ms for pod "kube-proxy-7c4kr" in "kube-system" namespace to be "Ready" or be gone ...
I1208 03:42:00.888483 130870 pod_ready.go:83] waiting for pod "kube-scheduler-addons-301052" in "kube-system" namespace to be "Ready" or be gone ...
I1208 03:42:01.288759 130870 pod_ready.go:94] pod "kube-scheduler-addons-301052" is "Ready"
I1208 03:42:01.288787 130870 pod_ready.go:86] duration metric: took 400.265679ms for pod "kube-scheduler-addons-301052" in "kube-system" namespace to be "Ready" or be gone ...
I1208 03:42:01.288801 130870 pod_ready.go:40] duration metric: took 1.604610295s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1208 03:42:01.336886 130870 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
I1208 03:42:01.338697 130870 out.go:179] * Done! kubectl is now configured to use "addons-301052" cluster and "default" namespace by default
==> CRI-O <==
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.749570341Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93921074-51cc-4834-ba43-26abcaf5d7b1 name=/runtime.v1.RuntimeService/ListContainers
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.749628989Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93921074-51cc-4834-ba43-26abcaf5d7b1 name=/runtime.v1.RuntimeService/ListContainers
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.749933474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de67f2afb05a4dedc21ce9cdbdaaf459bb269898be655b5b5676945ee9a2f3cc,PodSandboxId:b20236486a064f4d7ef2a28f870cd90f014896bde817043f42731cec1fd882f5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765165368466538876,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef7f12e8-972f-418c-8608-d62b63b98950,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b595b0b25f31e2bf7cc4ffa9062177599906802d9176b3ae5c158d48a60373fb,PodSandboxId:91974d99084c7a40d619c8508001da4363ea98044a0b33e9fa4979c556ba3b73,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765165326072584510,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57b67a40-1452-43b4-aa1c-f17676388dbf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ce1989799411b2f7ff90e1e217092df8f96b814a3be72c666f51529d1b848c5,PodSandboxId:cf4fc12546f0fb1e06e9a075a90169b296e324f9d7b721cb1ef4156a9586ad37,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1765165313338332935,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-bj9np,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5ab5234c-77d2-4257-8bee-62465621b4de,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:30039ae77472cd3b95e457cd22e1c7bee1a5821218287aa2c29d2ee366316180,PodSandboxId:5d0a8f791eadef27f90da640c1c4ebf48f42e22e91ba0b1bb6e393adc8bf5321,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765165294456030896,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qdld5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 45eda127-bba9-4b4b-8273-e1dce8914f1a,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a4370f0039a1ced9b33f20410d8d5b16a4761e10ee26cc4164b5776b4b05f0,PodSandboxId:6445a397066abd00b34157ecef558f8b247ee30501b3281c32f4fc641a447fc8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765165292553601046,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ckkz4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 37f4923c-6ada-495e-a07b-092d1cac4632,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:645a868efff329f0d7e4253c6dbdc8e26cf5f08f22ab6bca7b2da1fdf6cf380d,PodSandboxId:ccc04dae84e8c23741e82d441ab38d72993390ff06b52ca61049e1cb31607097,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765165277206651224,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4969ec9d-ea71-402c-b994-d7d4204d91e6,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467865c06eb44dc372c4b8a50808c67951897c5de5bea38de98416830d4ec56c,PodSandboxId:f2f0e02573dff400b702a48e25cad298f1876b70da2f222080e1e88f049a1db7,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765165256782420883,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-mn6gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c7c111-d878-4bea-8f1c-64b08778e73b,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709564618ae54f120db30996e41815d9eb651d09fafe7966c6d0727a3827f788,PodSandboxId:aab2aac1e6f835328cdaf7380abbd6dda8b63f2d623c5cd7fc5c60f5423eab69,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765165240871901120,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823c161b-93d0-4c6c-851d-3820d95a4ea5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45cf0e8ab6a77d2ea4dac6f4fa16358bec5dec74634e7ab68d5f46552d686d23,PodSandboxId:85ee96f590968746abaa3ba0a6191e42cc35d62501b75290c4e4af2a633d2eda,Metadata:&ContainerMetadata{Name:kube-proxy,Attemp
t:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765165234862765581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7c4kr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58c183-e8c6-40c7-8fb9-3cc7bbc35eef,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e37ee755338b96f8d3c09b255384598ff0852611f882c9e070403eaebbd672,PodSandboxId:8933d5386813b6d7fa3f44699f7ca8058dd42659b6252646bfe5a3d1a1fb408d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a3
67cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765165234737566827,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z7cr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f39adb02-b124-455f-b9aa-bb4f34c022f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fbb685fba1fcc30f7ad193348281fd444676bb14467bbaa07222dff97ff2905,PodSandboxId:80c4f0db2cc2717f43d6da71afb40f89645cfa67366cd3642a6bbe8910d183f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765165219762513901,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3596a1bf19d5e7e43177de11b99a68da,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc261a39952310881b167a21e143ae5b3e26f0c7805acbf8ab6c523a9702b42,PodSandboxId:af39ff2a0cefe208ebf470456ce01e1467af61396c8195cd2a06345262ae18c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765165219736024245,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42bd624619baae4c5f162c2e2b4c9559,},Annotations:map[string]string{io.kubernetes.container.hash: 53c
47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f320deceed20f49e1c4a0e65056562da25f2ed8f0f233fee06d3c8b77092ee9e,PodSandboxId:1afa54fabac5bbc94a9ecc9ef94a9792d3c58b7cc302d7681074bffbb31980f3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765165219727450267,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: f9c442eeafd13b0c55fc20762ee08821,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c033572e4caf3630aefd5cc91b1072ff48902921a38ae6bc4f74dc0e5a2deea,PodSandboxId:a900e2a6ac08ed5c078688583ba829877830a7e51c118eeb90e5cb3b11ed66aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765165219705978066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9487eb24755baafb7e85954efbf3df3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=93921074-51cc-4834-ba43-26abcaf5d7b1 name=/runtime.v1.RuntimeService/ListContainers
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.783272293Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2911925e-5222-42cf-aa9a-d02ce274550f name=/runtime.v1.RuntimeService/Version
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.783560258Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2911925e-5222-42cf-aa9a-d02ce274550f name=/runtime.v1.RuntimeService/Version
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.784962092Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd31bc55-357a-497d-bec7-2baf06989d90 name=/runtime.v1.ImageService/ImageFsInfo
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.786198564Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765165509786173976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585495,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd31bc55-357a-497d-bec7-2baf06989d90 name=/runtime.v1.ImageService/ImageFsInfo
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.787170040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=048b3767-9429-4b71-904a-e0cae24f7e64 name=/runtime.v1.RuntimeService/ListContainers
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.787236520Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=048b3767-9429-4b71-904a-e0cae24f7e64 name=/runtime.v1.RuntimeService/ListContainers
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.787553153Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de67f2afb05a4dedc21ce9cdbdaaf459bb269898be655b5b5676945ee9a2f3cc,PodSandboxId:b20236486a064f4d7ef2a28f870cd90f014896bde817043f42731cec1fd882f5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765165368466538876,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef7f12e8-972f-418c-8608-d62b63b98950,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b595b0b25f31e2bf7cc4ffa9062177599906802d9176b3ae5c158d48a60373fb,PodSandboxId:91974d99084c7a40d619c8508001da4363ea98044a0b33e9fa4979c556ba3b73,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765165326072584510,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57b67a40-1452-43b4-aa1c-f17676388dbf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ce1989799411b2f7ff90e1e217092df8f96b814a3be72c666f51529d1b848c5,PodSandboxId:cf4fc12546f0fb1e06e9a075a90169b296e324f9d7b721cb1ef4156a9586ad37,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1765165313338332935,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-bj9np,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5ab5234c-77d2-4257-8bee-62465621b4de,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:30039ae77472cd3b95e457cd22e1c7bee1a5821218287aa2c29d2ee366316180,PodSandboxId:5d0a8f791eadef27f90da640c1c4ebf48f42e22e91ba0b1bb6e393adc8bf5321,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765165294456030896,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qdld5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 45eda127-bba9-4b4b-8273-e1dce8914f1a,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a4370f0039a1ced9b33f20410d8d5b16a4761e10ee26cc4164b5776b4b05f0,PodSandboxId:6445a397066abd00b34157ecef558f8b247ee30501b3281c32f4fc641a447fc8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765165292553601046,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ckkz4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 37f4923c-6ada-495e-a07b-092d1cac4632,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:645a868efff329f0d7e4253c6dbdc8e26cf5f08f22ab6bca7b2da1fdf6cf380d,PodSandboxId:ccc04dae84e8c23741e82d441ab38d72993390ff06b52ca61049e1cb31607097,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765165277206651224,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4969ec9d-ea71-402c-b994-d7d4204d91e6,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467865c06eb44dc372c4b8a50808c67951897c5de5bea38de98416830d4ec56c,PodSandboxId:f2f0e02573dff400b702a48e25cad298f1876b70da2f222080e1e88f049a1db7,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765165256782420883,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-mn6gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c7c111-d878-4bea-8f1c-64b08778e73b,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709564618ae54f120db30996e41815d9eb651d09fafe7966c6d0727a3827f788,PodSandboxId:aab2aac1e6f835328cdaf7380abbd6dda8b63f2d623c5cd7fc5c60f5423eab69,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765165240871901120,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823c161b-93d0-4c6c-851d-3820d95a4ea5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45cf0e8ab6a77d2ea4dac6f4fa16358bec5dec74634e7ab68d5f46552d686d23,PodSandboxId:85ee96f590968746abaa3ba0a6191e42cc35d62501b75290c4e4af2a633d2eda,Metadata:&ContainerMetadata{Name:kube-proxy,Attemp
t:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765165234862765581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7c4kr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58c183-e8c6-40c7-8fb9-3cc7bbc35eef,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e37ee755338b96f8d3c09b255384598ff0852611f882c9e070403eaebbd672,PodSandboxId:8933d5386813b6d7fa3f44699f7ca8058dd42659b6252646bfe5a3d1a1fb408d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a3
67cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765165234737566827,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z7cr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f39adb02-b124-455f-b9aa-bb4f34c022f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fbb685fba1fcc30f7ad193348281fd444676bb14467bbaa07222dff97ff2905,PodSandboxId:80c4f0db2cc2717f43d6da71afb40f89645cfa67366cd3642a6bbe8910d183f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765165219762513901,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3596a1bf19d5e7e43177de11b99a68da,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc261a39952310881b167a21e143ae5b3e26f0c7805acbf8ab6c523a9702b42,PodSandboxId:af39ff2a0cefe208ebf470456ce01e1467af61396c8195cd2a06345262ae18c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765165219736024245,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42bd624619baae4c5f162c2e2b4c9559,},Annotations:map[string]string{io.kubernetes.container.hash: 53c
47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f320deceed20f49e1c4a0e65056562da25f2ed8f0f233fee06d3c8b77092ee9e,PodSandboxId:1afa54fabac5bbc94a9ecc9ef94a9792d3c58b7cc302d7681074bffbb31980f3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765165219727450267,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: f9c442eeafd13b0c55fc20762ee08821,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c033572e4caf3630aefd5cc91b1072ff48902921a38ae6bc4f74dc0e5a2deea,PodSandboxId:a900e2a6ac08ed5c078688583ba829877830a7e51c118eeb90e5cb3b11ed66aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765165219705978066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9487eb24755baafb7e85954efbf3df3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=048b3767-9429-4b71-904a-e0cae24f7e64 name=/runtime.v1.RuntimeService/ListContainers
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.814836258Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=106cc81b-d282-48b4-adfe-25c432d791a7 name=/runtime.v1.RuntimeService/Version
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.814928267Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=106cc81b-d282-48b4-adfe-25c432d791a7 name=/runtime.v1.RuntimeService/Version
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.816100573Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b653365c-bfb3-4e20-b8b6-b08697d773c5 name=/runtime.v1.ImageService/ImageFsInfo
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.817563722Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765165509817537475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585495,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b653365c-bfb3-4e20-b8b6-b08697d773c5 name=/runtime.v1.ImageService/ImageFsInfo
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.818404195Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc9085f3-9ddf-4c13-8ce4-d09bd50a1663 name=/runtime.v1.RuntimeService/ListContainers
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.818532690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc9085f3-9ddf-4c13-8ce4-d09bd50a1663 name=/runtime.v1.RuntimeService/ListContainers
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.819092883Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de67f2afb05a4dedc21ce9cdbdaaf459bb269898be655b5b5676945ee9a2f3cc,PodSandboxId:b20236486a064f4d7ef2a28f870cd90f014896bde817043f42731cec1fd882f5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765165368466538876,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef7f12e8-972f-418c-8608-d62b63b98950,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b595b0b25f31e2bf7cc4ffa9062177599906802d9176b3ae5c158d48a60373fb,PodSandboxId:91974d99084c7a40d619c8508001da4363ea98044a0b33e9fa4979c556ba3b73,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765165326072584510,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57b67a40-1452-43b4-aa1c-f17676388dbf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ce1989799411b2f7ff90e1e217092df8f96b814a3be72c666f51529d1b848c5,PodSandboxId:cf4fc12546f0fb1e06e9a075a90169b296e324f9d7b721cb1ef4156a9586ad37,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1765165313338332935,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-bj9np,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5ab5234c-77d2-4257-8bee-62465621b4de,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:30039ae77472cd3b95e457cd22e1c7bee1a5821218287aa2c29d2ee366316180,PodSandboxId:5d0a8f791eadef27f90da640c1c4ebf48f42e22e91ba0b1bb6e393adc8bf5321,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765165294456030896,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qdld5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 45eda127-bba9-4b4b-8273-e1dce8914f1a,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a4370f0039a1ced9b33f20410d8d5b16a4761e10ee26cc4164b5776b4b05f0,PodSandboxId:6445a397066abd00b34157ecef558f8b247ee30501b3281c32f4fc641a447fc8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765165292553601046,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ckkz4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 37f4923c-6ada-495e-a07b-092d1cac4632,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:645a868efff329f0d7e4253c6dbdc8e26cf5f08f22ab6bca7b2da1fdf6cf380d,PodSandboxId:ccc04dae84e8c23741e82d441ab38d72993390ff06b52ca61049e1cb31607097,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765165277206651224,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4969ec9d-ea71-402c-b994-d7d4204d91e6,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467865c06eb44dc372c4b8a50808c67951897c5de5bea38de98416830d4ec56c,PodSandboxId:f2f0e02573dff400b702a48e25cad298f1876b70da2f222080e1e88f049a1db7,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765165256782420883,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-mn6gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c7c111-d878-4bea-8f1c-64b08778e73b,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709564618ae54f120db30996e41815d9eb651d09fafe7966c6d0727a3827f788,PodSandboxId:aab2aac1e6f835328cdaf7380abbd6dda8b63f2d623c5cd7fc5c60f5423eab69,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765165240871901120,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823c161b-93d0-4c6c-851d-3820d95a4ea5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45cf0e8ab6a77d2ea4dac6f4fa16358bec5dec74634e7ab68d5f46552d686d23,PodSandboxId:85ee96f590968746abaa3ba0a6191e42cc35d62501b75290c4e4af2a633d2eda,Metadata:&ContainerMetadata{Name:kube-proxy,Attemp
t:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765165234862765581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7c4kr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58c183-e8c6-40c7-8fb9-3cc7bbc35eef,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e37ee755338b96f8d3c09b255384598ff0852611f882c9e070403eaebbd672,PodSandboxId:8933d5386813b6d7fa3f44699f7ca8058dd42659b6252646bfe5a3d1a1fb408d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a3
67cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765165234737566827,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z7cr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f39adb02-b124-455f-b9aa-bb4f34c022f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fbb685fba1fcc30f7ad193348281fd444676bb14467bbaa07222dff97ff2905,PodSandboxId:80c4f0db2cc2717f43d6da71afb40f89645cfa67366cd3642a6bbe8910d183f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765165219762513901,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3596a1bf19d5e7e43177de11b99a68da,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc261a39952310881b167a21e143ae5b3e26f0c7805acbf8ab6c523a9702b42,PodSandboxId:af39ff2a0cefe208ebf470456ce01e1467af61396c8195cd2a06345262ae18c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765165219736024245,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42bd624619baae4c5f162c2e2b4c9559,},Annotations:map[string]string{io.kubernetes.container.hash: 53c
47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f320deceed20f49e1c4a0e65056562da25f2ed8f0f233fee06d3c8b77092ee9e,PodSandboxId:1afa54fabac5bbc94a9ecc9ef94a9792d3c58b7cc302d7681074bffbb31980f3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765165219727450267,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: f9c442eeafd13b0c55fc20762ee08821,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c033572e4caf3630aefd5cc91b1072ff48902921a38ae6bc4f74dc0e5a2deea,PodSandboxId:a900e2a6ac08ed5c078688583ba829877830a7e51c118eeb90e5cb3b11ed66aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765165219705978066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9487eb24755baafb7e85954efbf3df3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc9085f3-9ddf-4c13-8ce4-d09bd50a1663 name=/runtime.v1.RuntimeService/ListContainers
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.847119771Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad90c914-fe9e-4d10-90b9-d5f9c1515e88 name=/runtime.v1.RuntimeService/Version
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.847200914Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad90c914-fe9e-4d10-90b9-d5f9c1515e88 name=/runtime.v1.RuntimeService/Version
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.848914773Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=793ec751-1b92-4667-bf48-30d727895dce name=/runtime.v1.ImageService/ImageFsInfo
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.850553197Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765165509850529126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585495,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=793ec751-1b92-4667-bf48-30d727895dce name=/runtime.v1.ImageService/ImageFsInfo
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.851593488Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5678fee9-15eb-40d4-bb58-1f3acce6524b name=/runtime.v1.RuntimeService/ListContainers
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.851759881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5678fee9-15eb-40d4-bb58-1f3acce6524b name=/runtime.v1.RuntimeService/ListContainers
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.852245104Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de67f2afb05a4dedc21ce9cdbdaaf459bb269898be655b5b5676945ee9a2f3cc,PodSandboxId:b20236486a064f4d7ef2a28f870cd90f014896bde817043f42731cec1fd882f5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765165368466538876,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef7f12e8-972f-418c-8608-d62b63b98950,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b595b0b25f31e2bf7cc4ffa9062177599906802d9176b3ae5c158d48a60373fb,PodSandboxId:91974d99084c7a40d619c8508001da4363ea98044a0b33e9fa4979c556ba3b73,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765165326072584510,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57b67a40-1452-43b4-aa1c-f17676388dbf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ce1989799411b2f7ff90e1e217092df8f96b814a3be72c666f51529d1b848c5,PodSandboxId:cf4fc12546f0fb1e06e9a075a90169b296e324f9d7b721cb1ef4156a9586ad37,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1765165313338332935,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-bj9np,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5ab5234c-77d2-4257-8bee-62465621b4de,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:30039ae77472cd3b95e457cd22e1c7bee1a5821218287aa2c29d2ee366316180,PodSandboxId:5d0a8f791eadef27f90da640c1c4ebf48f42e22e91ba0b1bb6e393adc8bf5321,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765165294456030896,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qdld5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 45eda127-bba9-4b4b-8273-e1dce8914f1a,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a4370f0039a1ced9b33f20410d8d5b16a4761e10ee26cc4164b5776b4b05f0,PodSandboxId:6445a397066abd00b34157ecef558f8b247ee30501b3281c32f4fc641a447fc8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765165292553601046,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ckkz4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 37f4923c-6ada-495e-a07b-092d1cac4632,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:645a868efff329f0d7e4253c6dbdc8e26cf5f08f22ab6bca7b2da1fdf6cf380d,PodSandboxId:ccc04dae84e8c23741e82d441ab38d72993390ff06b52ca61049e1cb31607097,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765165277206651224,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4969ec9d-ea71-402c-b994-d7d4204d91e6,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467865c06eb44dc372c4b8a50808c67951897c5de5bea38de98416830d4ec56c,PodSandboxId:f2f0e02573dff400b702a48e25cad298f1876b70da2f222080e1e88f049a1db7,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765165256782420883,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-mn6gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c7c111-d878-4bea-8f1c-64b08778e73b,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709564618ae54f120db30996e41815d9eb651d09fafe7966c6d0727a3827f788,PodSandboxId:aab2aac1e6f835328cdaf7380abbd6dda8b63f2d623c5cd7fc5c60f5423eab69,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765165240871901120,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823c161b-93d0-4c6c-851d-3820d95a4ea5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45cf0e8ab6a77d2ea4dac6f4fa16358bec5dec74634e7ab68d5f46552d686d23,PodSandboxId:85ee96f590968746abaa3ba0a6191e42cc35d62501b75290c4e4af2a633d2eda,Metadata:&ContainerMetadata{Name:kube-proxy,Attemp
t:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765165234862765581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7c4kr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58c183-e8c6-40c7-8fb9-3cc7bbc35eef,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e37ee755338b96f8d3c09b255384598ff0852611f882c9e070403eaebbd672,PodSandboxId:8933d5386813b6d7fa3f44699f7ca8058dd42659b6252646bfe5a3d1a1fb408d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a3
67cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765165234737566827,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z7cr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f39adb02-b124-455f-b9aa-bb4f34c022f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fbb685fba1fcc30f7ad193348281fd444676bb14467bbaa07222dff97ff2905,PodSandboxId:80c4f0db2cc2717f43d6da71afb40f89645cfa67366cd3642a6bbe8910d183f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765165219762513901,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3596a1bf19d5e7e43177de11b99a68da,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc261a39952310881b167a21e143ae5b3e26f0c7805acbf8ab6c523a9702b42,PodSandboxId:af39ff2a0cefe208ebf470456ce01e1467af61396c8195cd2a06345262ae18c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765165219736024245,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42bd624619baae4c5f162c2e2b4c9559,},Annotations:map[string]string{io.kubernetes.container.hash: 53c
47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f320deceed20f49e1c4a0e65056562da25f2ed8f0f233fee06d3c8b77092ee9e,PodSandboxId:1afa54fabac5bbc94a9ecc9ef94a9792d3c58b7cc302d7681074bffbb31980f3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765165219727450267,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: f9c442eeafd13b0c55fc20762ee08821,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c033572e4caf3630aefd5cc91b1072ff48902921a38ae6bc4f74dc0e5a2deea,PodSandboxId:a900e2a6ac08ed5c078688583ba829877830a7e51c118eeb90e5cb3b11ed66aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765165219705978066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9487eb24755baafb7e85954efbf3df3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5678fee9-15eb-40d4-bb58-1f3acce6524b name=/runtime.v1.RuntimeService/ListContainers
Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.874315725Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
de67f2afb05a4 docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 2 minutes ago Running nginx 0 b20236486a064 nginx default
b595b0b25f31e gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 3 minutes ago Running busybox 0 91974d99084c7 busybox default
1ce1989799411 registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27 3 minutes ago Running controller 0 cf4fc12546f0f ingress-nginx-controller-6c8bf45fb-bj9np ingress-nginx
30039ae77472c registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f 3 minutes ago Exited patch 0 5d0a8f791eade ingress-nginx-admission-patch-qdld5 ingress-nginx
c7a4370f0039a registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f 3 minutes ago Exited create 0 6445a397066ab ingress-nginx-admission-create-ckkz4 ingress-nginx
645a868efff32 docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 3 minutes ago Running minikube-ingress-dns 0 ccc04dae84e8c kube-ingress-dns-minikube kube-system
467865c06eb44 docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 f2f0e02573dff amd-gpu-device-plugin-mn6gz kube-system
709564618ae54 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 aab2aac1e6f83 storage-provisioner kube-system
45cf0e8ab6a77 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45 4 minutes ago Running kube-proxy 0 85ee96f590968 kube-proxy-7c4kr kube-system
a6e37ee755338 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 8933d5386813b coredns-66bc5c9577-z7cr6 kube-system
2fbb685fba1fc 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952 4 minutes ago Running kube-scheduler 0 80c4f0db2cc27 kube-scheduler-addons-301052 kube-system
0dc261a399523 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8 4 minutes ago Running kube-controller-manager 0 af39ff2a0cefe kube-controller-manager-addons-301052 kube-system
f320deceed20f a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85 4 minutes ago Running kube-apiserver 0 1afa54fabac5b kube-apiserver-addons-301052 kube-system
2c033572e4caf a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1 4 minutes ago Running etcd 0 a900e2a6ac08e etcd-addons-301052 kube-system
==> coredns [a6e37ee755338b96f8d3c09b255384598ff0852611f882c9e070403eaebbd672] <==
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
[INFO] Reloading complete
[INFO] 127.0.0.1:34647 - 51955 "HINFO IN 5210262041541038279.2713969031462723015. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.037564941s
[INFO] 10.244.0.23:47444 - 6140 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000460634s
[INFO] 10.244.0.23:52156 - 32041 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000120084s
[INFO] 10.244.0.23:43492 - 18720 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124677s
[INFO] 10.244.0.23:40893 - 49998 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000145391s
[INFO] 10.244.0.23:48441 - 748 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090052s
[INFO] 10.244.0.23:37366 - 782 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000133158s
[INFO] 10.244.0.23:36217 - 48086 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003291914s
[INFO] 10.244.0.23:42213 - 27919 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.003908309s
[INFO] 10.244.0.28:51164 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000343688s
[INFO] 10.244.0.28:52181 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000144018s
==> describe nodes <==
Name: addons-301052
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-301052
kubernetes.io/os=linux
minikube.k8s.io/commit=730a0938e5fe3e95dced085e5e597b4345feecad
minikube.k8s.io/name=addons-301052
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_08T03_40_27_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-301052
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 08 Dec 2025 03:40:23 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-301052
AcquireTime: <unset>
RenewTime: Mon, 08 Dec 2025 03:45:03 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 08 Dec 2025 03:43:31 +0000 Mon, 08 Dec 2025 03:40:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 08 Dec 2025 03:43:31 +0000 Mon, 08 Dec 2025 03:40:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 08 Dec 2025 03:43:31 +0000 Mon, 08 Dec 2025 03:40:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 08 Dec 2025 03:43:31 +0000 Mon, 08 Dec 2025 03:40:28 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.103
Hostname: addons-301052
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
System Info:
Machine ID: e8d346d227a3494ebffe43f0ee3efd1d
System UUID: e8d346d2-27a3-494e-bffe-43f0ee3efd1d
Boot ID: 6a6149ae-760d-4566-bc6c-1aa8f15648d4
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.2
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m9s
default hello-world-app-5d498dc89-sdslz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m27s
ingress-nginx ingress-nginx-controller-6c8bf45fb-bj9np 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m30s
kube-system amd-gpu-device-plugin-mn6gz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m35s
kube-system coredns-66bc5c9577-z7cr6 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m38s
kube-system etcd-addons-301052 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m43s
kube-system kube-apiserver-addons-301052 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m44s
kube-system kube-controller-manager-addons-301052 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m44s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m32s
kube-system kube-proxy-7c4kr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m38s
kube-system kube-scheduler-addons-301052 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m43s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m31s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m30s kube-proxy
Normal Starting 4m43s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m43s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m43s kubelet Node addons-301052 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m43s kubelet Node addons-301052 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m43s kubelet Node addons-301052 status is now: NodeHasSufficientPID
Normal NodeReady 4m42s kubelet Node addons-301052 status is now: NodeReady
Normal RegisteredNode 4m39s node-controller Node addons-301052 event: Registered Node addons-301052 in Controller
==> dmesg <==
[ +0.484163] kauditd_printk_skb: 285 callbacks suppressed
[ +1.342782] kauditd_printk_skb: 395 callbacks suppressed
[ +7.622901] kauditd_printk_skb: 305 callbacks suppressed
[Dec 8 03:41] kauditd_printk_skb: 26 callbacks suppressed
[ +6.647693] kauditd_printk_skb: 26 callbacks suppressed
[ +9.728423] kauditd_printk_skb: 23 callbacks suppressed
[ +9.035294] kauditd_printk_skb: 20 callbacks suppressed
[ +5.019597] kauditd_printk_skb: 80 callbacks suppressed
[ +1.006170] kauditd_printk_skb: 115 callbacks suppressed
[ +4.719225] kauditd_printk_skb: 88 callbacks suppressed
[ +0.000113] kauditd_printk_skb: 130 callbacks suppressed
[ +5.070195] kauditd_printk_skb: 62 callbacks suppressed
[Dec 8 03:42] kauditd_printk_skb: 17 callbacks suppressed
[ +13.097318] kauditd_printk_skb: 53 callbacks suppressed
[ +0.000780] kauditd_printk_skb: 22 callbacks suppressed
[ +1.302931] kauditd_printk_skb: 107 callbacks suppressed
[ +4.078293] kauditd_printk_skb: 69 callbacks suppressed
[ +0.455995] kauditd_printk_skb: 120 callbacks suppressed
[ +3.739923] kauditd_printk_skb: 156 callbacks suppressed
[ +2.507060] kauditd_printk_skb: 85 callbacks suppressed
[ +4.423570] kauditd_printk_skb: 32 callbacks suppressed
[Dec 8 03:43] kauditd_printk_skb: 30 callbacks suppressed
[ +0.000282] kauditd_printk_skb: 62 callbacks suppressed
[ +7.845661] kauditd_printk_skb: 41 callbacks suppressed
[Dec 8 03:45] kauditd_printk_skb: 127 callbacks suppressed
==> etcd [2c033572e4caf3630aefd5cc91b1072ff48902921a38ae6bc4f74dc0e5a2deea] <==
{"level":"warn","ts":"2025-12-08T03:41:28.960130Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"214.427586ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-08T03:41:28.960216Z","caller":"traceutil/trace.go:172","msg":"trace[1044816213] range","detail":"{range_begin:/registry/controllerrevisions; range_end:; response_count:0; response_revision:995; }","duration":"215.383474ms","start":"2025-12-08T03:41:28.744782Z","end":"2025-12-08T03:41:28.960165Z","steps":["trace[1044816213] 'agreement among raft nodes before linearized reading' (duration: 214.405714ms)"],"step_count":1}
{"level":"info","ts":"2025-12-08T03:41:28.957153Z","caller":"traceutil/trace.go:172","msg":"trace[1754549274] transaction","detail":"{read_only:false; response_revision:995; number_of_response:1; }","duration":"149.428813ms","start":"2025-12-08T03:41:28.807712Z","end":"2025-12-08T03:41:28.957141Z","steps":["trace[1754549274] 'process raft request' (duration: 149.307863ms)"],"step_count":1}
{"level":"info","ts":"2025-12-08T03:41:37.213105Z","caller":"traceutil/trace.go:172","msg":"trace[1935213483] linearizableReadLoop","detail":"{readStateIndex:1072; appliedIndex:1072; }","duration":"119.245874ms","start":"2025-12-08T03:41:37.093797Z","end":"2025-12-08T03:41:37.213042Z","steps":["trace[1935213483] 'read index received' (duration: 119.241123ms)","trace[1935213483] 'applied index is now lower than readState.Index' (duration: 4.101µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-08T03:41:37.213274Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.460924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-qdld5\" limit:1 ","response":"range_response_count:1 size:4635"}
{"level":"info","ts":"2025-12-08T03:41:37.213295Z","caller":"traceutil/trace.go:172","msg":"trace[40001368] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-qdld5; range_end:; response_count:1; response_revision:1038; }","duration":"119.495779ms","start":"2025-12-08T03:41:37.093793Z","end":"2025-12-08T03:41:37.213288Z","steps":["trace[40001368] 'agreement among raft nodes before linearized reading' (duration: 119.429041ms)"],"step_count":1}
{"level":"info","ts":"2025-12-08T03:41:37.213320Z","caller":"traceutil/trace.go:172","msg":"trace[660356740] transaction","detail":"{read_only:false; response_revision:1039; number_of_response:1; }","duration":"161.806946ms","start":"2025-12-08T03:41:37.051500Z","end":"2025-12-08T03:41:37.213307Z","steps":["trace[660356740] 'process raft request' (duration: 161.621073ms)"],"step_count":1}
{"level":"info","ts":"2025-12-08T03:41:43.359857Z","caller":"traceutil/trace.go:172","msg":"trace[981194662] linearizableReadLoop","detail":"{readStateIndex:1121; appliedIndex:1121; }","duration":"121.880397ms","start":"2025-12-08T03:41:43.237959Z","end":"2025-12-08T03:41:43.359840Z","steps":["trace[981194662] 'read index received' (duration: 121.874993ms)","trace[981194662] 'applied index is now lower than readState.Index' (duration: 4.496µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-08T03:41:43.360232Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.256205ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deviceclasses\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-08T03:41:43.360252Z","caller":"traceutil/trace.go:172","msg":"trace[668778472] range","detail":"{range_begin:/registry/deviceclasses; range_end:; response_count:0; response_revision:1087; }","duration":"122.292274ms","start":"2025-12-08T03:41:43.237955Z","end":"2025-12-08T03:41:43.360247Z","steps":["trace[668778472] 'agreement among raft nodes before linearized reading' (duration: 122.235464ms)"],"step_count":1}
{"level":"info","ts":"2025-12-08T03:41:43.360032Z","caller":"traceutil/trace.go:172","msg":"trace[1186481515] transaction","detail":"{read_only:false; response_revision:1087; number_of_response:1; }","duration":"144.396794ms","start":"2025-12-08T03:41:43.215627Z","end":"2025-12-08T03:41:43.360024Z","steps":["trace[1186481515] 'process raft request' (duration: 144.291876ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-08T03:41:43.361976Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.221559ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
{"level":"info","ts":"2025-12-08T03:41:43.362089Z","caller":"traceutil/trace.go:172","msg":"trace[2133983913] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1087; }","duration":"102.343255ms","start":"2025-12-08T03:41:43.259738Z","end":"2025-12-08T03:41:43.362081Z","steps":["trace[2133983913] 'agreement among raft nodes before linearized reading' (duration: 100.712678ms)"],"step_count":1}
{"level":"info","ts":"2025-12-08T03:41:51.612520Z","caller":"traceutil/trace.go:172","msg":"trace[1302055486] transaction","detail":"{read_only:false; response_revision:1128; number_of_response:1; }","duration":"161.966721ms","start":"2025-12-08T03:41:51.450535Z","end":"2025-12-08T03:41:51.612501Z","steps":["trace[1302055486] 'process raft request' (duration: 161.857757ms)"],"step_count":1}
{"level":"info","ts":"2025-12-08T03:42:31.093365Z","caller":"traceutil/trace.go:172","msg":"trace[1724223708] transaction","detail":"{read_only:false; response_revision:1367; number_of_response:1; }","duration":"122.922679ms","start":"2025-12-08T03:42:30.970422Z","end":"2025-12-08T03:42:31.093345Z","steps":["trace[1724223708] 'process raft request' (duration: 122.505545ms)"],"step_count":1}
{"level":"info","ts":"2025-12-08T03:42:36.269763Z","caller":"traceutil/trace.go:172","msg":"trace[1827662002] linearizableReadLoop","detail":"{readStateIndex:1465; appliedIndex:1465; }","duration":"136.967741ms","start":"2025-12-08T03:42:36.132776Z","end":"2025-12-08T03:42:36.269743Z","steps":["trace[1827662002] 'read index received' (duration: 136.960938ms)","trace[1827662002] 'applied index is now lower than readState.Index' (duration: 5.804µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-08T03:42:36.270129Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.350849ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" limit:1 ","response":"range_response_count:1 size:1837"}
{"level":"info","ts":"2025-12-08T03:42:36.270168Z","caller":"traceutil/trace.go:172","msg":"trace[2087791428] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1419; }","duration":"137.406646ms","start":"2025-12-08T03:42:36.132754Z","end":"2025-12-08T03:42:36.270161Z","steps":["trace[2087791428] 'agreement among raft nodes before linearized reading' (duration: 137.198202ms)"],"step_count":1}
{"level":"info","ts":"2025-12-08T03:42:36.391570Z","caller":"traceutil/trace.go:172","msg":"trace[967568169] linearizableReadLoop","detail":"{readStateIndex:1466; appliedIndex:1466; }","duration":"121.546101ms","start":"2025-12-08T03:42:36.269934Z","end":"2025-12-08T03:42:36.391480Z","steps":["trace[967568169] 'read index received' (duration: 121.537815ms)","trace[967568169] 'applied index is now lower than readState.Index' (duration: 7.299µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-08T03:42:36.394264Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"200.519786ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-08T03:42:36.394310Z","caller":"traceutil/trace.go:172","msg":"trace[1145301337] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1419; }","duration":"200.575376ms","start":"2025-12-08T03:42:36.193723Z","end":"2025-12-08T03:42:36.394299Z","steps":["trace[1145301337] 'agreement among raft nodes before linearized reading' (duration: 197.995443ms)"],"step_count":1}
{"level":"info","ts":"2025-12-08T03:42:36.394624Z","caller":"traceutil/trace.go:172","msg":"trace[1506426832] transaction","detail":"{read_only:false; response_revision:1420; number_of_response:1; }","duration":"235.532098ms","start":"2025-12-08T03:42:36.159079Z","end":"2025-12-08T03:42:36.394611Z","steps":["trace[1506426832] 'process raft request' (duration: 232.851485ms)"],"step_count":1}
{"level":"info","ts":"2025-12-08T03:42:36.395132Z","caller":"traceutil/trace.go:172","msg":"trace[1359952185] transaction","detail":"{read_only:false; response_revision:1421; number_of_response:1; }","duration":"123.703148ms","start":"2025-12-08T03:42:36.271421Z","end":"2025-12-08T03:42:36.395125Z","steps":["trace[1359952185] 'process raft request' (duration: 123.534023ms)"],"step_count":1}
{"level":"info","ts":"2025-12-08T03:42:59.768980Z","caller":"traceutil/trace.go:172","msg":"trace[1462135902] transaction","detail":"{read_only:false; response_revision:1602; number_of_response:1; }","duration":"262.956918ms","start":"2025-12-08T03:42:59.506009Z","end":"2025-12-08T03:42:59.768966Z","steps":["trace[1462135902] 'process raft request' (duration: 262.846353ms)"],"step_count":1}
{"level":"info","ts":"2025-12-08T03:43:00.966183Z","caller":"traceutil/trace.go:172","msg":"trace[145528908] transaction","detail":"{read_only:false; response_revision:1605; number_of_response:1; }","duration":"229.308063ms","start":"2025-12-08T03:43:00.736831Z","end":"2025-12-08T03:43:00.966140Z","steps":["trace[145528908] 'process raft request' (duration: 229.124099ms)"],"step_count":1}
==> kernel <==
03:45:10 up 5 min, 0 users, load average: 0.43, 1.08, 0.57
Linux addons-301052 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec 8 03:04:10 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [f320deceed20f49e1c4a0e65056562da25f2ed8f0f233fee06d3c8b77092ee9e] <==
> logger="UnhandledError"
E1208 03:41:09.377557 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.254.20:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.254.20:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.254.20:443: connect: connection refused" logger="UnhandledError"
E1208 03:41:09.380696 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.254.20:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.254.20:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.254.20:443: connect: connection refused" logger="UnhandledError"
E1208 03:41:09.384993 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.254.20:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.254.20:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.254.20:443: connect: connection refused" logger="UnhandledError"
I1208 03:41:09.446180 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1208 03:42:12.124444 1 conn.go:339] Error on socket receive: read tcp 192.168.39.103:8443->192.168.39.1:51470: use of closed network connection
E1208 03:42:12.325659 1 conn.go:339] Error on socket receive: read tcp 192.168.39.103:8443->192.168.39.1:51494: use of closed network connection
I1208 03:42:21.706011 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.158.175"}
I1208 03:42:43.836123 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1208 03:42:44.020999 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.108.137"}
E1208 03:42:52.636881 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I1208 03:43:07.071869 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1208 03:43:10.397173 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1208 03:43:23.496974 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1208 03:43:23.497153 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1208 03:43:23.531784 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1208 03:43:23.531869 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1208 03:43:23.556603 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1208 03:43:23.556688 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1208 03:43:23.571614 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1208 03:43:23.571686 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1208 03:43:24.537250 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1208 03:43:24.572339 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1208 03:43:24.707382 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I1208 03:45:08.851154 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.2.84"}
==> kube-controller-manager [0dc261a39952310881b167a21e143ae5b3e26f0c7805acbf8ab6c523a9702b42] <==
I1208 03:43:31.541957 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1208 03:43:31.963718 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1208 03:43:31.964728 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1208 03:43:32.622744 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1208 03:43:32.623852 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1208 03:43:32.695003 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1208 03:43:32.696165 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1208 03:43:38.650743 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1208 03:43:38.651947 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1208 03:43:41.814835 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1208 03:43:41.816016 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1208 03:43:44.111713 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1208 03:43:44.113678 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1208 03:44:01.340791 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1208 03:44:01.342117 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1208 03:44:03.944639 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1208 03:44:03.945850 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1208 03:44:07.200476 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1208 03:44:07.201534 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1208 03:44:48.138841 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1208 03:44:48.139791 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1208 03:44:53.200843 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1208 03:44:53.202377 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1208 03:44:56.089729 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1208 03:44:56.090763 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [45cf0e8ab6a77d2ea4dac6f4fa16358bec5dec74634e7ab68d5f46552d686d23] <==
I1208 03:40:36.779581 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1208 03:40:36.944295 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1208 03:40:36.990006 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.103"]
E1208 03:40:37.009525 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1208 03:40:39.423183 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1208 03:40:39.423265 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1208 03:40:39.423291 1 server_linux.go:132] "Using iptables Proxier"
I1208 03:40:39.857599 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1208 03:40:39.874623 1 server.go:527] "Version info" version="v1.34.2"
I1208 03:40:39.874646 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1208 03:40:39.905164 1 config.go:106] "Starting endpoint slice config controller"
I1208 03:40:39.905382 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1208 03:40:39.907757 1 config.go:403] "Starting serviceCIDR config controller"
I1208 03:40:39.907794 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1208 03:40:39.917954 1 config.go:200] "Starting service config controller"
I1208 03:40:39.917983 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1208 03:40:39.942230 1 config.go:309] "Starting node config controller"
I1208 03:40:39.942280 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1208 03:40:39.942295 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1208 03:40:40.010773 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1208 03:40:40.025226 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1208 03:40:40.119149 1 shared_informer.go:356] "Caches are synced" controller="service config"
==> kube-scheduler [2fbb685fba1fcc30f7ad193348281fd444676bb14467bbaa07222dff97ff2905] <==
E1208 03:40:23.435354 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1208 03:40:23.439526 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1208 03:40:23.440154 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1208 03:40:23.442275 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1208 03:40:23.442382 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1208 03:40:23.442465 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1208 03:40:23.442529 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1208 03:40:23.443529 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1208 03:40:23.443820 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1208 03:40:23.444088 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1208 03:40:23.444958 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1208 03:40:24.269930 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1208 03:40:24.312228 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1208 03:40:24.317228 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1208 03:40:24.359477 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1208 03:40:24.372956 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1208 03:40:24.577154 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1208 03:40:24.645724 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1208 03:40:24.681942 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1208 03:40:24.695657 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1208 03:40:24.718369 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1208 03:40:24.722477 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1208 03:40:24.766774 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1208 03:40:24.915435 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
I1208 03:40:26.820411 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Dec 08 03:43:31 addons-301052 kubelet[1497]: I1208 03:43:31.133010 1497 scope.go:117] "RemoveContainer" containerID="5b57b21529abb5d954dbc09480117b7a6cd26b04714b66e14dfed0e747ec53e9"
Dec 08 03:43:31 addons-301052 kubelet[1497]: I1208 03:43:31.251269 1497 scope.go:117] "RemoveContainer" containerID="1d13255e7b443e24580f5511e84bc90b15d8b3280e461f696cac7a72e8b470ba"
Dec 08 03:43:37 addons-301052 kubelet[1497]: E1208 03:43:37.646698 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165417646354989 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:43:37 addons-301052 kubelet[1497]: E1208 03:43:37.646722 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165417646354989 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:43:47 addons-301052 kubelet[1497]: E1208 03:43:47.648783 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165427648494508 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:43:47 addons-301052 kubelet[1497]: E1208 03:43:47.649154 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165427648494508 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:43:57 addons-301052 kubelet[1497]: E1208 03:43:57.652188 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165437651757259 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:43:57 addons-301052 kubelet[1497]: E1208 03:43:57.652280 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165437651757259 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:44:07 addons-301052 kubelet[1497]: E1208 03:44:07.655363 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165447654931941 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:44:07 addons-301052 kubelet[1497]: E1208 03:44:07.655401 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165447654931941 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:44:17 addons-301052 kubelet[1497]: E1208 03:44:17.658556 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165457658227065 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:44:17 addons-301052 kubelet[1497]: E1208 03:44:17.658595 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165457658227065 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:44:27 addons-301052 kubelet[1497]: E1208 03:44:27.661247 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165467660875864 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:44:27 addons-301052 kubelet[1497]: E1208 03:44:27.661296 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165467660875864 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:44:30 addons-301052 kubelet[1497]: I1208 03:44:30.302309 1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-mn6gz" secret="" err="secret \"gcp-auth\" not found"
Dec 08 03:44:37 addons-301052 kubelet[1497]: E1208 03:44:37.664620 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165477664181411 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:44:37 addons-301052 kubelet[1497]: E1208 03:44:37.664666 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165477664181411 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:44:44 addons-301052 kubelet[1497]: I1208 03:44:44.302738 1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Dec 08 03:44:47 addons-301052 kubelet[1497]: E1208 03:44:47.666741 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165487666394651 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:44:47 addons-301052 kubelet[1497]: E1208 03:44:47.666764 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165487666394651 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:44:57 addons-301052 kubelet[1497]: E1208 03:44:57.670689 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165497670027182 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:44:57 addons-301052 kubelet[1497]: E1208 03:44:57.670796 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165497670027182 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:45:07 addons-301052 kubelet[1497]: E1208 03:45:07.673590 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165507673232271 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:45:07 addons-301052 kubelet[1497]: E1208 03:45:07.673632 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165507673232271 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 08 03:45:08 addons-301052 kubelet[1497]: I1208 03:45:08.916722 1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qvls\" (UniqueName: \"kubernetes.io/projected/ad54be60-4b07-4b6b-8c16-0adec3518a16-kube-api-access-8qvls\") pod \"hello-world-app-5d498dc89-sdslz\" (UID: \"ad54be60-4b07-4b6b-8c16-0adec3518a16\") " pod="default/hello-world-app-5d498dc89-sdslz"
==> storage-provisioner [709564618ae54f120db30996e41815d9eb651d09fafe7966c6d0727a3827f788] <==
W1208 03:44:44.657497 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:44:46.661034 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:44:46.666525 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:44:48.669202 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:44:48.676834 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:44:50.680476 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:44:50.685479 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:44:52.688542 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:44:52.696904 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:44:54.701326 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:44:54.706129 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:44:56.710623 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:44:56.718833 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:44:58.722135 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:44:58.726976 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:45:00.732184 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:45:00.740506 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:45:02.743290 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:45:02.748349 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:45:04.751897 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:45:04.758413 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:45:06.761821 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:45:06.766514 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:45:08.788778 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1208 03:45:08.805019 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-301052 -n addons-301052
helpers_test.go:269: (dbg) Run: kubectl --context addons-301052 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-sdslz ingress-nginx-admission-create-ckkz4 ingress-nginx-admission-patch-qdld5
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context addons-301052 describe pod hello-world-app-5d498dc89-sdslz ingress-nginx-admission-create-ckkz4 ingress-nginx-admission-patch-qdld5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-301052 describe pod hello-world-app-5d498dc89-sdslz ingress-nginx-admission-create-ckkz4 ingress-nginx-admission-patch-qdld5: exit status 1 (73.131644ms)
-- stdout --
Name:             hello-world-app-5d498dc89-sdslz
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-301052/192.168.39.103
Start Time:       Mon, 08 Dec 2025 03:45:08 +0000
Labels:           app=hello-world-app
                  pod-template-hash=5d498dc89
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/hello-world-app-5d498dc89
Containers:
  hello-world-app:
    Container ID:
    Image:          docker.io/kicbase/echo-server:1.0
    Image ID:
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:      ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8qvls (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-8qvls:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-sdslz to addons-301052
  Normal  Pulling    1s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-ckkz4" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-qdld5" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-301052 describe pod hello-world-app-5d498dc89-sdslz ingress-nginx-admission-create-ckkz4 ingress-nginx-admission-patch-qdld5: exit status 1
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-301052 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-301052 addons disable ingress-dns --alsologtostderr -v=1: (1.596543507s)
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-301052 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-301052 addons disable ingress --alsologtostderr -v=1: (7.659073809s)
--- FAIL: TestAddons/parallel/Ingress (156.51s)