=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run: kubectl --context addons-160421 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run: kubectl --context addons-160421 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run: kubectl --context addons-160421 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [680dd0bb-32c0-4828-b24e-4ab7a48348f6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [680dd0bb-32c0-4828-b24e-4ab7a48348f6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.023274166s
I1108 09:13:01.003634 371695 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run: out/minikube-linux-amd64 -p addons-160421 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-160421 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.729631045s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run: kubectl --context addons-160421 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run: out/minikube-linux-amd64 -p addons-160421 ip
addons_test.go:299: (dbg) Run: nslookup hello-john.test 192.168.39.239
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-160421 -n addons-160421
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p addons-160421 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-160421 logs -n 25: (1.215185193s)
helpers_test.go:260: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-341231 │ download-only-341231 │ jenkins │ v1.37.0 │ 08 Nov 25 09:09 UTC │ 08 Nov 25 09:09 UTC │
│ start │ --download-only -p binary-mirror-691676 --alsologtostderr --binary-mirror http://127.0.0.1:43781 --driver=kvm2 --container-runtime=crio │ binary-mirror-691676 │ jenkins │ v1.37.0 │ 08 Nov 25 09:09 UTC │ │
│ delete │ -p binary-mirror-691676 │ binary-mirror-691676 │ jenkins │ v1.37.0 │ 08 Nov 25 09:09 UTC │ 08 Nov 25 09:09 UTC │
│ addons │ disable dashboard -p addons-160421 │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:09 UTC │ │
│ addons │ enable dashboard -p addons-160421 │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:09 UTC │ │
│ start │ -p addons-160421 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:09 UTC │ 08 Nov 25 09:12 UTC │
│ addons │ addons-160421 addons disable volcano --alsologtostderr -v=1 │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │ 08 Nov 25 09:12 UTC │
│ addons │ addons-160421 addons disable gcp-auth --alsologtostderr -v=1 │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │ 08 Nov 25 09:12 UTC │
│ addons │ enable headlamp -p addons-160421 --alsologtostderr -v=1 │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │ 08 Nov 25 09:12 UTC │
│ addons │ addons-160421 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │ 08 Nov 25 09:12 UTC │
│ addons │ addons-160421 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │ 08 Nov 25 09:12 UTC │
│ ssh │ addons-160421 ssh cat /opt/local-path-provisioner/pvc-50f96a44-34b5-4055-b100-1f077f23a804_default_test-pvc/file1 │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │ 08 Nov 25 09:12 UTC │
│ addons │ addons-160421 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │ 08 Nov 25 09:13 UTC │
│ addons │ addons-160421 addons disable headlamp --alsologtostderr -v=1 │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │ 08 Nov 25 09:12 UTC │
│ addons │ addons-160421 addons disable metrics-server --alsologtostderr -v=1 │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │ 08 Nov 25 09:12 UTC │
│ ip │ addons-160421 ip │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │ 08 Nov 25 09:12 UTC │
│ addons │ addons-160421 addons disable registry --alsologtostderr -v=1 │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │ 08 Nov 25 09:12 UTC │
│ addons │ addons-160421 addons disable yakd --alsologtostderr -v=1 │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │ 08 Nov 25 09:12 UTC │
│ ssh │ addons-160421 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ │
│ addons │ addons-160421 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ 08 Nov 25 09:13 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-160421 │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ 08 Nov 25 09:13 UTC │
│ addons │ addons-160421 addons disable registry-creds --alsologtostderr -v=1 │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ 08 Nov 25 09:13 UTC │
│ addons │ addons-160421 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ 08 Nov 25 09:13 UTC │
│ addons │ addons-160421 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:13 UTC │ 08 Nov 25 09:13 UTC │
│ ip │ addons-160421 ip │ addons-160421 │ jenkins │ v1.37.0 │ 08 Nov 25 09:15 UTC │ 08 Nov 25 09:15 UTC │
└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/11/08 09:09:59
Running on machine: ubuntu-20-agent-8
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1108 09:09:59.030167 372364 out.go:360] Setting OutFile to fd 1 ...
I1108 09:09:59.030459 372364 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:09:59.030471 372364 out.go:374] Setting ErrFile to fd 2...
I1108 09:09:59.030476 372364 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:09:59.030699 372364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-367706/.minikube/bin
I1108 09:09:59.031293 372364 out.go:368] Setting JSON to false
I1108 09:09:59.032217 372364 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3143,"bootTime":1762589856,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1108 09:09:59.032335 372364 start.go:143] virtualization: kvm guest
I1108 09:09:59.034179 372364 out.go:179] * [addons-160421] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1108 09:09:59.035424 372364 out.go:179] - MINIKUBE_LOCATION=21865
I1108 09:09:59.035463 372364 notify.go:221] Checking for updates...
I1108 09:09:59.037899 372364 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1108 09:09:59.039224 372364 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21865-367706/kubeconfig
I1108 09:09:59.040722 372364 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-367706/.minikube
I1108 09:09:59.042242 372364 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1108 09:09:59.043469 372364 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1108 09:09:59.044739 372364 driver.go:422] Setting default libvirt URI to qemu:///system
I1108 09:09:59.075973 372364 out.go:179] * Using the kvm2 driver based on user configuration
I1108 09:09:59.077050 372364 start.go:309] selected driver: kvm2
I1108 09:09:59.077065 372364 start.go:930] validating driver "kvm2" against <nil>
I1108 09:09:59.077077 372364 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1108 09:09:59.077826 372364 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1108 09:09:59.078077 372364 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1108 09:09:59.078110 372364 cni.go:84] Creating CNI manager for ""
I1108 09:09:59.078153 372364 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1108 09:09:59.078164 372364 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1108 09:09:59.078200 372364 start.go:353] cluster config:
{Name:addons-160421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-160421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1108 09:09:59.078328 372364 iso.go:125] acquiring lock: {Name:mkb94ab64a34aaa7418be26f2cd05d27589e5cab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1108 09:09:59.079617 372364 out.go:179] * Starting "addons-160421" primary control-plane node in "addons-160421" cluster
I1108 09:09:59.080536 372364 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1108 09:09:59.080569 372364 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-367706/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
I1108 09:09:59.080580 372364 cache.go:59] Caching tarball of preloaded images
I1108 09:09:59.080660 372364 preload.go:233] Found /home/jenkins/minikube-integration/21865-367706/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1108 09:09:59.080670 372364 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
I1108 09:09:59.081003 372364 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/config.json ...
I1108 09:09:59.081026 372364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/config.json: {Name:mkc2606b68b92b6bd264349d1f8641b96cddbaff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1108 09:09:59.081176 372364 start.go:360] acquireMachinesLock for addons-160421: {Name:mk48bf25b82dcf786ab58e9691b61918392aa6b8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1108 09:09:59.081224 372364 start.go:364] duration metric: took 35.049µs to acquireMachinesLock for "addons-160421"
I1108 09:09:59.081241 372364 start.go:93] Provisioning new machine with config: &{Name:addons-160421 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-160421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1108 09:09:59.081327 372364 start.go:125] createHost starting for "" (driver="kvm2")
I1108 09:09:59.083357 372364 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1108 09:09:59.083522 372364 start.go:159] libmachine.API.Create for "addons-160421" (driver="kvm2")
I1108 09:09:59.083548 372364 client.go:173] LocalClient.Create starting
I1108 09:09:59.083639 372364 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21865-367706/.minikube/certs/ca.pem
I1108 09:09:59.218865 372364 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21865-367706/.minikube/certs/cert.pem
I1108 09:09:59.276491 372364 main.go:143] libmachine: creating domain...
I1108 09:09:59.276513 372364 main.go:143] libmachine: creating network...
I1108 09:09:59.278190 372364 main.go:143] libmachine: found existing default network
I1108 09:09:59.278443 372364 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1108 09:09:59.279074 372364 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d20e20}
I1108 09:09:59.279185 372364 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-160421</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1108 09:09:59.285175 372364 main.go:143] libmachine: creating private network mk-addons-160421 192.168.39.0/24...
I1108 09:09:59.349620 372364 main.go:143] libmachine: private network mk-addons-160421 192.168.39.0/24 created
I1108 09:09:59.349967 372364 main.go:143] libmachine: <network>
<name>mk-addons-160421</name>
<uuid>64fb815d-55ef-4ead-8b6a-c92559ebbec4</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:6b:b2:da'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1108 09:09:59.350004 372364 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421 ...
I1108 09:09:59.350030 372364 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21865-367706/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso
I1108 09:09:59.350042 372364 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21865-367706/.minikube
I1108 09:09:59.350122 372364 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21865-367706/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21865-367706/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso...
I1108 09:09:59.637716 372364 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa...
I1108 09:09:59.782026 372364 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/addons-160421.rawdisk...
I1108 09:09:59.782083 372364 main.go:143] libmachine: Writing magic tar header
I1108 09:09:59.782109 372364 main.go:143] libmachine: Writing SSH key tar header
I1108 09:09:59.782188 372364 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421 ...
I1108 09:09:59.782260 372364 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421
I1108 09:09:59.782298 372364 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421 (perms=drwx------)
I1108 09:09:59.782317 372364 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21865-367706/.minikube/machines
I1108 09:09:59.782329 372364 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21865-367706/.minikube/machines (perms=drwxr-xr-x)
I1108 09:09:59.782343 372364 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21865-367706/.minikube
I1108 09:09:59.782352 372364 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21865-367706/.minikube (perms=drwxr-xr-x)
I1108 09:09:59.782370 372364 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21865-367706
I1108 09:09:59.782381 372364 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21865-367706 (perms=drwxrwxr-x)
I1108 09:09:59.782391 372364 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1108 09:09:59.782401 372364 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1108 09:09:59.782409 372364 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1108 09:09:59.782419 372364 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1108 09:09:59.782430 372364 main.go:143] libmachine: checking permissions on dir: /home
I1108 09:09:59.782443 372364 main.go:143] libmachine: skipping /home - not owner
I1108 09:09:59.782451 372364 main.go:143] libmachine: defining domain...
I1108 09:09:59.783897 372364 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-160421</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/addons-160421.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-160421'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1108 09:09:59.791220 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:56:7e:8e in network default
I1108 09:09:59.791878 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:09:59.791901 372364 main.go:143] libmachine: starting domain...
I1108 09:09:59.791905 372364 main.go:143] libmachine: ensuring networks are active...
I1108 09:09:59.792778 372364 main.go:143] libmachine: Ensuring network default is active
I1108 09:09:59.793220 372364 main.go:143] libmachine: Ensuring network mk-addons-160421 is active
I1108 09:09:59.793925 372364 main.go:143] libmachine: getting domain XML...
I1108 09:09:59.795054 372364 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-160421</name>
<uuid>98a3416f-2446-4b90-b3a3-e4dcbf116111</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/addons-160421.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:45:c0:76'/>
<source network='mk-addons-160421'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:56:7e:8e'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1108 09:10:01.120207 372364 main.go:143] libmachine: waiting for domain to start...
I1108 09:10:01.121498 372364 main.go:143] libmachine: domain is now running
I1108 09:10:01.121514 372364 main.go:143] libmachine: waiting for IP...
I1108 09:10:01.122230 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:01.122666 372364 main.go:143] libmachine: no network interface addresses found for domain addons-160421 (source=lease)
I1108 09:10:01.122677 372364 main.go:143] libmachine: trying to list again with source=arp
I1108 09:10:01.122931 372364 main.go:143] libmachine: unable to find current IP address of domain addons-160421 in network mk-addons-160421 (interfaces detected: [])
I1108 09:10:01.122980 372364 retry.go:31] will retry after 197.567236ms: waiting for domain to come up
I1108 09:10:01.322715 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:01.323256 372364 main.go:143] libmachine: no network interface addresses found for domain addons-160421 (source=lease)
I1108 09:10:01.323269 372364 main.go:143] libmachine: trying to list again with source=arp
I1108 09:10:01.323533 372364 main.go:143] libmachine: unable to find current IP address of domain addons-160421 in network mk-addons-160421 (interfaces detected: [])
I1108 09:10:01.323573 372364 retry.go:31] will retry after 333.995564ms: waiting for domain to come up
I1108 09:10:01.659496 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:01.660035 372364 main.go:143] libmachine: no network interface addresses found for domain addons-160421 (source=lease)
I1108 09:10:01.660056 372364 main.go:143] libmachine: trying to list again with source=arp
I1108 09:10:01.660409 372364 main.go:143] libmachine: unable to find current IP address of domain addons-160421 in network mk-addons-160421 (interfaces detected: [])
I1108 09:10:01.660487 372364 retry.go:31] will retry after 430.90022ms: waiting for domain to come up
I1108 09:10:02.093304 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:02.093856 372364 main.go:143] libmachine: no network interface addresses found for domain addons-160421 (source=lease)
I1108 09:10:02.093888 372364 main.go:143] libmachine: trying to list again with source=arp
I1108 09:10:02.094157 372364 main.go:143] libmachine: unable to find current IP address of domain addons-160421 in network mk-addons-160421 (interfaces detected: [])
I1108 09:10:02.094193 372364 retry.go:31] will retry after 458.100269ms: waiting for domain to come up
I1108 09:10:02.553967 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:02.554546 372364 main.go:143] libmachine: no network interface addresses found for domain addons-160421 (source=lease)
I1108 09:10:02.554564 372364 main.go:143] libmachine: trying to list again with source=arp
I1108 09:10:02.554865 372364 main.go:143] libmachine: unable to find current IP address of domain addons-160421 in network mk-addons-160421 (interfaces detected: [])
I1108 09:10:02.554907 372364 retry.go:31] will retry after 731.965569ms: waiting for domain to come up
I1108 09:10:03.289024 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:03.289661 372364 main.go:143] libmachine: no network interface addresses found for domain addons-160421 (source=lease)
I1108 09:10:03.289678 372364 main.go:143] libmachine: trying to list again with source=arp
I1108 09:10:03.289995 372364 main.go:143] libmachine: unable to find current IP address of domain addons-160421 in network mk-addons-160421 (interfaces detected: [])
I1108 09:10:03.290037 372364 retry.go:31] will retry after 762.00035ms: waiting for domain to come up
I1108 09:10:04.054064 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:04.054552 372364 main.go:143] libmachine: no network interface addresses found for domain addons-160421 (source=lease)
I1108 09:10:04.054568 372364 main.go:143] libmachine: trying to list again with source=arp
I1108 09:10:04.054803 372364 main.go:143] libmachine: unable to find current IP address of domain addons-160421 in network mk-addons-160421 (interfaces detected: [])
I1108 09:10:04.054841 372364 retry.go:31] will retry after 1.124005714s: waiting for domain to come up
I1108 09:10:05.180295 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:05.180891 372364 main.go:143] libmachine: no network interface addresses found for domain addons-160421 (source=lease)
I1108 09:10:05.180912 372364 main.go:143] libmachine: trying to list again with source=arp
I1108 09:10:05.181147 372364 main.go:143] libmachine: unable to find current IP address of domain addons-160421 in network mk-addons-160421 (interfaces detected: [])
I1108 09:10:05.181194 372364 retry.go:31] will retry after 1.282283885s: waiting for domain to come up
I1108 09:10:06.465580 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:06.466195 372364 main.go:143] libmachine: no network interface addresses found for domain addons-160421 (source=lease)
I1108 09:10:06.466209 372364 main.go:143] libmachine: trying to list again with source=arp
I1108 09:10:06.466467 372364 main.go:143] libmachine: unable to find current IP address of domain addons-160421 in network mk-addons-160421 (interfaces detected: [])
I1108 09:10:06.466503 372364 retry.go:31] will retry after 1.456321107s: waiting for domain to come up
I1108 09:10:07.925296 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:07.925862 372364 main.go:143] libmachine: no network interface addresses found for domain addons-160421 (source=lease)
I1108 09:10:07.925886 372364 main.go:143] libmachine: trying to list again with source=arp
I1108 09:10:07.926174 372364 main.go:143] libmachine: unable to find current IP address of domain addons-160421 in network mk-addons-160421 (interfaces detected: [])
I1108 09:10:07.926218 372364 retry.go:31] will retry after 1.861121785s: waiting for domain to come up
I1108 09:10:09.789287 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:09.789987 372364 main.go:143] libmachine: no network interface addresses found for domain addons-160421 (source=lease)
I1108 09:10:09.790010 372364 main.go:143] libmachine: trying to list again with source=arp
I1108 09:10:09.790343 372364 main.go:143] libmachine: unable to find current IP address of domain addons-160421 in network mk-addons-160421 (interfaces detected: [])
I1108 09:10:09.790391 372364 retry.go:31] will retry after 2.276588447s: waiting for domain to come up
I1108 09:10:12.069744 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:12.070285 372364 main.go:143] libmachine: no network interface addresses found for domain addons-160421 (source=lease)
I1108 09:10:12.070303 372364 main.go:143] libmachine: trying to list again with source=arp
I1108 09:10:12.070562 372364 main.go:143] libmachine: unable to find current IP address of domain addons-160421 in network mk-addons-160421 (interfaces detected: [])
I1108 09:10:12.070597 372364 retry.go:31] will retry after 2.652801335s: waiting for domain to come up
I1108 09:10:14.724715 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:14.725359 372364 main.go:143] libmachine: domain addons-160421 has current primary IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:14.725380 372364 main.go:143] libmachine: found domain IP: 192.168.39.239
I1108 09:10:14.725388 372364 main.go:143] libmachine: reserving static IP address...
I1108 09:10:14.725814 372364 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-160421", mac: "52:54:00:45:c0:76", ip: "192.168.39.239"} in network mk-addons-160421
I1108 09:10:14.920014 372364 main.go:143] libmachine: reserved static IP address 192.168.39.239 for domain addons-160421
I1108 09:10:14.920059 372364 main.go:143] libmachine: waiting for SSH...
I1108 09:10:14.920067 372364 main.go:143] libmachine: Getting to WaitForSSH function...
I1108 09:10:14.923193 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:14.923644 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:minikube Clientid:01:52:54:00:45:c0:76}
I1108 09:10:14.923671 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:14.923880 372364 main.go:143] libmachine: Using SSH client type: native
I1108 09:10:14.924092 372364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil> [] 0s} 192.168.39.239 22 <nil> <nil>}
I1108 09:10:14.924102 372364 main.go:143] libmachine: About to run SSH command:
exit 0
I1108 09:10:15.024851 372364 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1108 09:10:15.025238 372364 main.go:143] libmachine: domain creation complete
I1108 09:10:15.026784 372364 machine.go:94] provisionDockerMachine start ...
I1108 09:10:15.029120 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:15.029516 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:15.029575 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:15.029781 372364 main.go:143] libmachine: Using SSH client type: native
I1108 09:10:15.029989 372364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil> [] 0s} 192.168.39.239 22 <nil> <nil>}
I1108 09:10:15.030000 372364 main.go:143] libmachine: About to run SSH command:
hostname
I1108 09:10:15.129559 372364 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1108 09:10:15.129593 372364 buildroot.go:166] provisioning hostname "addons-160421"
I1108 09:10:15.132271 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:15.132652 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:15.132691 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:15.132898 372364 main.go:143] libmachine: Using SSH client type: native
I1108 09:10:15.133092 372364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil> [] 0s} 192.168.39.239 22 <nil> <nil>}
I1108 09:10:15.133103 372364 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-160421 && echo "addons-160421" | sudo tee /etc/hostname
I1108 09:10:15.249938 372364 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-160421
I1108 09:10:15.253175 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:15.253666 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:15.253693 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:15.253941 372364 main.go:143] libmachine: Using SSH client type: native
I1108 09:10:15.254155 372364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil> [] 0s} 192.168.39.239 22 <nil> <nil>}
I1108 09:10:15.254170 372364 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-160421' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-160421/g' /etc/hosts;
else
echo '127.0.1.1 addons-160421' | sudo tee -a /etc/hosts;
fi
fi
I1108 09:10:15.363608 372364 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1108 09:10:15.363643 372364 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21865-367706/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-367706/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-367706/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-367706/.minikube}
I1108 09:10:15.363669 372364 buildroot.go:174] setting up certificates
I1108 09:10:15.363681 372364 provision.go:84] configureAuth start
I1108 09:10:15.366768 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:15.367177 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:15.367200 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:15.369649 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:15.369996 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:15.370014 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:15.370150 372364 provision.go:143] copyHostCerts
I1108 09:10:15.370212 372364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-367706/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-367706/.minikube/key.pem (1675 bytes)
I1108 09:10:15.370344 372364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-367706/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-367706/.minikube/ca.pem (1078 bytes)
I1108 09:10:15.370406 372364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-367706/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-367706/.minikube/cert.pem (1123 bytes)
I1108 09:10:15.370453 372364 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-367706/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-367706/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-367706/.minikube/certs/ca-key.pem org=jenkins.addons-160421 san=[127.0.0.1 192.168.39.239 addons-160421 localhost minikube]
I1108 09:10:15.598429 372364 provision.go:177] copyRemoteCerts
I1108 09:10:15.598490 372364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1108 09:10:15.601618 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:15.602131 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:15.602172 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:15.602423 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:15.708822 372364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-367706/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1108 09:10:15.737962 372364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-367706/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1108 09:10:15.766138 372364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-367706/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1108 09:10:15.794400 372364 provision.go:87] duration metric: took 430.703707ms to configureAuth
I1108 09:10:15.794435 372364 buildroot.go:189] setting minikube options for container-runtime
I1108 09:10:15.794642 372364 config.go:182] Loaded profile config "addons-160421": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:10:15.797314 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:15.797705 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:15.797734 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:15.797919 372364 main.go:143] libmachine: Using SSH client type: native
I1108 09:10:15.798144 372364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil> [] 0s} 192.168.39.239 22 <nil> <nil>}
I1108 09:10:15.798169 372364 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1108 09:10:16.035207 372364 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1108 09:10:16.035240 372364 machine.go:97] duration metric: took 1.008435806s to provisionDockerMachine
I1108 09:10:16.035282 372364 client.go:176] duration metric: took 16.951726713s to LocalClient.Create
I1108 09:10:16.035309 372364 start.go:167] duration metric: took 16.951786146s to libmachine.API.Create "addons-160421"
I1108 09:10:16.035321 372364 start.go:293] postStartSetup for "addons-160421" (driver="kvm2")
I1108 09:10:16.035334 372364 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1108 09:10:16.035438 372364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1108 09:10:16.038152 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:16.038640 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:16.038662 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:16.038834 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:16.119812 372364 ssh_runner.go:195] Run: cat /etc/os-release
I1108 09:10:16.124293 372364 info.go:137] Remote host: Buildroot 2025.02
I1108 09:10:16.124321 372364 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-367706/.minikube/addons for local assets ...
I1108 09:10:16.124396 372364 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-367706/.minikube/files for local assets ...
I1108 09:10:16.124432 372364 start.go:296] duration metric: took 89.103703ms for postStartSetup
I1108 09:10:16.127534 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:16.127939 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:16.127964 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:16.128160 372364 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/config.json ...
I1108 09:10:16.128337 372364 start.go:128] duration metric: took 17.046998961s to createHost
I1108 09:10:16.130706 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:16.131005 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:16.131023 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:16.131174 372364 main.go:143] libmachine: Using SSH client type: native
I1108 09:10:16.131404 372364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil> [] 0s} 192.168.39.239 22 <nil> <nil>}
I1108 09:10:16.131416 372364 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1108 09:10:16.233564 372364 main.go:143] libmachine: SSH cmd err, output: <nil>: 1762593016.191685563
I1108 09:10:16.233596 372364 fix.go:216] guest clock: 1762593016.191685563
I1108 09:10:16.233608 372364 fix.go:229] Guest: 2025-11-08 09:10:16.191685563 +0000 UTC Remote: 2025-11-08 09:10:16.128348694 +0000 UTC m=+17.146517969 (delta=63.336869ms)
I1108 09:10:16.233629 372364 fix.go:200] guest clock delta is within tolerance: 63.336869ms
I1108 09:10:16.233634 372364 start.go:83] releasing machines lock for "addons-160421", held for 17.15240102s
I1108 09:10:16.236720 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:16.237231 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:16.237288 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:16.237831 372364 ssh_runner.go:195] Run: cat /version.json
I1108 09:10:16.237923 372364 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1108 09:10:16.240737 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:16.241091 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:16.241117 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:16.241129 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:16.241280 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:16.241786 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:16.241817 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:16.242010 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:16.321937 372364 ssh_runner.go:195] Run: systemctl --version
I1108 09:10:16.356684 372364 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1108 09:10:16.512071 372364 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1108 09:10:16.518960 372364 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1108 09:10:16.519030 372364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1108 09:10:16.539501 372364 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1108 09:10:16.539564 372364 start.go:496] detecting cgroup driver to use...
I1108 09:10:16.539653 372364 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1108 09:10:16.558346 372364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1108 09:10:16.580898 372364 docker.go:218] disabling cri-docker service (if available) ...
I1108 09:10:16.580972 372364 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1108 09:10:16.602796 372364 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1108 09:10:16.619009 372364 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1108 09:10:16.772422 372364 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1108 09:10:16.977625 372364 docker.go:234] disabling docker service ...
I1108 09:10:16.977699 372364 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1108 09:10:16.993704 372364 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1108 09:10:17.008203 372364 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1108 09:10:17.161130 372364 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1108 09:10:17.308709 372364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1108 09:10:17.324122 372364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1108 09:10:17.346475 372364 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1108 09:10:17.346540 372364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1108 09:10:17.358486 372364 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1108 09:10:17.358554 372364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1108 09:10:17.370321 372364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1108 09:10:17.382269 372364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1108 09:10:17.394091 372364 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1108 09:10:17.407058 372364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1108 09:10:17.418861 372364 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1108 09:10:17.438439 372364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1108 09:10:17.449931 372364 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1108 09:10:17.460070 372364 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1108 09:10:17.460135 372364 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1108 09:10:17.486663 372364 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1108 09:10:17.500201 372364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1108 09:10:17.637961 372364 ssh_runner.go:195] Run: sudo systemctl restart crio
I1108 09:10:17.912187 372364 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1108 09:10:17.912303 372364 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1108 09:10:17.917820 372364 start.go:564] Will wait 60s for crictl version
I1108 09:10:17.917886 372364 ssh_runner.go:195] Run: which crictl
I1108 09:10:17.922098 372364 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1108 09:10:17.961125 372364 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1108 09:10:17.961234 372364 ssh_runner.go:195] Run: crio --version
I1108 09:10:17.989517 372364 ssh_runner.go:195] Run: crio --version
I1108 09:10:18.020508 372364 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
I1108 09:10:18.025062 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:18.025568 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:18.025595 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:18.025859 372364 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1108 09:10:18.030696 372364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1108 09:10:18.048328 372364 kubeadm.go:884] updating cluster {Name:addons-160421 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-160421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1108 09:10:18.048443 372364 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1108 09:10:18.048487 372364 ssh_runner.go:195] Run: sudo crictl images --output json
I1108 09:10:18.083504 372364 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
I1108 09:10:18.083593 372364 ssh_runner.go:195] Run: which lz4
I1108 09:10:18.087935 372364 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1108 09:10:18.092689 372364 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1108 09:10:18.092726 372364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-367706/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
I1108 09:10:19.389506 372364 crio.go:462] duration metric: took 1.301611187s to copy over tarball
I1108 09:10:19.389592 372364 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1108 09:10:20.926448 372364 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.536816977s)
I1108 09:10:20.926491 372364 crio.go:469] duration metric: took 1.536949672s to extract the tarball
I1108 09:10:20.926502 372364 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1108 09:10:20.968489 372364 ssh_runner.go:195] Run: sudo crictl images --output json
I1108 09:10:21.015834 372364 crio.go:514] all images are preloaded for cri-o runtime.
I1108 09:10:21.015861 372364 cache_images.go:86] Images are preloaded, skipping loading
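The preload step above follows a simple pattern: probe the container runtime for images, and if none are found, copy the cached tarball into the VM and unpack it under /var with `tar -I <compressor>`, then delete the tarball. A minimal, self-contained round-trip of that tar flow is sketched below; gzip is substituted for lz4 so it runs without the lz4 binary, and all paths are throwaway stand-ins, not the paths from this log:

```shell
# Sketch of the preload tarball round-trip (create, then extract with -I).
# The real flow uses lz4 and /preloaded.tar.lz4 inside the minikube VM.
mkdir -p /tmp/preload-src && echo hello > /tmp/preload-src/img.txt
tar -I gzip -cf /tmp/preloaded.tar.gz -C /tmp/preload-src .
mkdir -p /tmp/preload-dst
tar -I gzip -xf /tmp/preloaded.tar.gz -C /tmp/preload-dst
cat /tmp/preload-dst/img.txt
```

The real extraction also passes `--xattrs --xattrs-include security.capability` so file capabilities on the preloaded binaries survive the unpack.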
I1108 09:10:21.015874 372364 kubeadm.go:935] updating node { 192.168.39.239 8443 v1.34.1 crio true true} ...
I1108 09:10:21.016010 372364 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-160421 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.239
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:addons-160421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1108 09:10:21.016096 372364 ssh_runner.go:195] Run: crio config
I1108 09:10:21.065702 372364 cni.go:84] Creating CNI manager for ""
I1108 09:10:21.065723 372364 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1108 09:10:21.065740 372364 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1108 09:10:21.065771 372364 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.239 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-160421 NodeName:addons-160421 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1108 09:10:21.065916 372364 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.239
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-160421"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.239"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.239"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
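The generated config above (later written to /var/tmp/minikube/kubeadm.yaml.new) is a single YAML stream of four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick structural sanity check on such a stream, using a trimmed stand-in file rather than the real config:

```shell
# Write a skeleton of the four-document kubeadm stream and count the kinds.
cat > /tmp/kubeadm.demo.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' /tmp/kubeadm.demo.yaml
```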
I1108 09:10:21.065986 372364 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1108 09:10:21.078056 372364 binaries.go:44] Found k8s binaries, skipping transfer
I1108 09:10:21.078155 372364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1108 09:10:21.089650 372364 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I1108 09:10:21.109342 372364 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1108 09:10:21.129551 372364 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
I1108 09:10:21.150123 372364 ssh_runner.go:195] Run: grep 192.168.39.239 control-plane.minikube.internal$ /etc/hosts
I1108 09:10:21.154318 372364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.239 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
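The /etc/hosts rewrite just above is an idempotent replace: strip any existing line for the name, append a fresh entry, and copy the temp file back over the original. The same pattern, run against a throwaway file instead of the real /etc/hosts:

```shell
# Idempotent hosts-entry replace (bash, matching the log's grep -v pattern).
HOSTS=/tmp/hosts.demo
printf '127.0.0.1 localhost\n10.0.0.5\tcontrol-plane.minikube.internal\n' > "$HOSTS"
# Drop any tab-separated entry for the name, then append the current IP.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$HOSTS"; \
  echo "192.168.39.239 control-plane.minikube.internal"; } > /tmp/h.$$
cp /tmp/h.$$ "$HOSTS"
grep control-plane "$HOSTS"
```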
I1108 09:10:21.168573 372364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1108 09:10:21.306878 372364 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1108 09:10:21.342326 372364 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421 for IP: 192.168.39.239
I1108 09:10:21.342351 372364 certs.go:195] generating shared ca certs ...
I1108 09:10:21.342368 372364 certs.go:227] acquiring lock for ca certs: {Name:mk553df53871187b1d6f6320d84749ca67b24b23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1108 09:10:21.342520 372364 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-367706/.minikube/ca.key
I1108 09:10:21.814334 372364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-367706/.minikube/ca.crt ...
I1108 09:10:21.814367 372364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-367706/.minikube/ca.crt: {Name:mk8b7bdc0d7ee947bb617d74b2c11ca1a7bae6d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1108 09:10:21.814556 372364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-367706/.minikube/ca.key ...
I1108 09:10:21.814569 372364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-367706/.minikube/ca.key: {Name:mk0a2fa0e2833517d05df9e249be8a9c43ea3955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1108 09:10:21.814645 372364 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-367706/.minikube/proxy-client-ca.key
I1108 09:10:21.963880 372364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-367706/.minikube/proxy-client-ca.crt ...
I1108 09:10:21.963915 372364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-367706/.minikube/proxy-client-ca.crt: {Name:mk903ee2499414c70ab92da6b47184d617a73e16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1108 09:10:21.964097 372364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-367706/.minikube/proxy-client-ca.key ...
I1108 09:10:21.964108 372364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-367706/.minikube/proxy-client-ca.key: {Name:mk7a3e59937ea9b7baac6467546022a6c1b853f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1108 09:10:21.964179 372364 certs.go:257] generating profile certs ...
I1108 09:10:21.964237 372364 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/client.key
I1108 09:10:21.964264 372364 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/client.crt with IP's: []
I1108 09:10:22.167456 372364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/client.crt ...
I1108 09:10:22.167489 372364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/client.crt: {Name:mk0920ae07c168bb98ffd3e560478048d5fe6f37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1108 09:10:22.167679 372364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/client.key ...
I1108 09:10:22.167691 372364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/client.key: {Name:mkf7cd75587aededf36d9eeff9c4c7d862c2d968 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1108 09:10:22.167766 372364 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/apiserver.key.3d5e80d1
I1108 09:10:22.167786 372364 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/apiserver.crt.3d5e80d1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.239]
I1108 09:10:22.337327 372364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/apiserver.crt.3d5e80d1 ...
I1108 09:10:22.337360 372364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/apiserver.crt.3d5e80d1: {Name:mk320a27a76969adb3972a0471be6aab1992343b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1108 09:10:22.337546 372364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/apiserver.key.3d5e80d1 ...
I1108 09:10:22.337562 372364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/apiserver.key.3d5e80d1: {Name:mk7e37fa769897dce96450ba7c1541344454880e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1108 09:10:22.337638 372364 certs.go:382] copying /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/apiserver.crt.3d5e80d1 -> /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/apiserver.crt
I1108 09:10:22.337707 372364 certs.go:386] copying /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/apiserver.key.3d5e80d1 -> /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/apiserver.key
I1108 09:10:22.337754 372364 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/proxy-client.key
I1108 09:10:22.337773 372364 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/proxy-client.crt with IP's: []
I1108 09:10:22.599567 372364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/proxy-client.crt ...
I1108 09:10:22.599605 372364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/proxy-client.crt: {Name:mkd6dfa3892aaf18086a70a2c0b476fd678f119d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1108 09:10:22.599825 372364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/proxy-client.key ...
I1108 09:10:22.599846 372364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/proxy-client.key: {Name:mkdf9a78379549a29906f9541459f8f8ec831baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1108 09:10:22.600065 372364 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-367706/.minikube/certs/ca-key.pem (1679 bytes)
I1108 09:10:22.600128 372364 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-367706/.minikube/certs/ca.pem (1078 bytes)
I1108 09:10:22.600170 372364 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-367706/.minikube/certs/cert.pem (1123 bytes)
I1108 09:10:22.600206 372364 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-367706/.minikube/certs/key.pem (1675 bytes)
I1108 09:10:22.600796 372364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-367706/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1108 09:10:22.630554 372364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-367706/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1108 09:10:22.658524 372364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-367706/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1108 09:10:22.687702 372364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-367706/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1108 09:10:22.718913 372364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1108 09:10:22.749275 372364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1108 09:10:22.777498 372364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1108 09:10:22.807810 372364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-367706/.minikube/profiles/addons-160421/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1108 09:10:22.837606 372364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-367706/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1108 09:10:22.866643 372364 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1108 09:10:22.886599 372364 ssh_runner.go:195] Run: openssl version
I1108 09:10:22.892944 372364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1108 09:10:22.908438 372364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1108 09:10:22.914493 372364 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 8 09:10 /usr/share/ca-certificates/minikubeCA.pem
I1108 09:10:22.914571 372364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1108 09:10:22.923139 372364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
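The two symlink steps above install minikubeCA.pem into the OpenSSL trust store under its subject-hash name (b5213941.0 here): OpenSSL locates a CA by the file name `<subject-hash>.0`. The scheme can be demonstrated with a throwaway self-signed CA, assuming only the openssl CLI:

```shell
# Generate a demo CA and link it under its subject-hash name, as the log
# does for minikubeCA.pem in /etc/ssl/certs.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.pem -subj "/CN=demoCA" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in /tmp/demo-ca.pem)   # 8 hex chars
ln -fs /tmp/demo-ca.pem "/tmp/${hash}.0"
readlink "/tmp/${hash}.0"
```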
I1108 09:10:22.936824 372364 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1108 09:10:22.942522 372364 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1108 09:10:22.942588 372364 kubeadm.go:401] StartCluster: {Name:addons-160421 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-160421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1108 09:10:22.942662 372364 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1108 09:10:22.942716 372364 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1108 09:10:22.993437 372364 cri.go:89] found id: ""
I1108 09:10:22.993569 372364 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1108 09:10:23.005899 372364 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1108 09:10:23.017340 372364 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1108 09:10:23.028910 372364 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1108 09:10:23.028928 372364 kubeadm.go:158] found existing configuration files:
I1108 09:10:23.028973 372364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1108 09:10:23.039353 372364 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1108 09:10:23.039416 372364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1108 09:10:23.050363 372364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1108 09:10:23.061205 372364 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1108 09:10:23.061298 372364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1108 09:10:23.072729 372364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1108 09:10:23.085447 372364 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1108 09:10:23.085517 372364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1108 09:10:23.099176 372364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1108 09:10:23.111836 372364 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1108 09:10:23.111892 372364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1108 09:10:23.126120 372364 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1108 09:10:23.286908 372364 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1108 09:10:34.677659 372364 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
I1108 09:10:34.677764 372364 kubeadm.go:319] [preflight] Running pre-flight checks
I1108 09:10:34.677909 372364 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1108 09:10:34.678082 372364 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1108 09:10:34.678178 372364 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1108 09:10:34.678290 372364 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1108 09:10:34.679855 372364 out.go:252] - Generating certificates and keys ...
I1108 09:10:34.679953 372364 kubeadm.go:319] [certs] Using existing ca certificate authority
I1108 09:10:34.680064 372364 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1108 09:10:34.680163 372364 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1108 09:10:34.680255 372364 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1108 09:10:34.680344 372364 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1108 09:10:34.680416 372364 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1108 09:10:34.680518 372364 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1108 09:10:34.680709 372364 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-160421 localhost] and IPs [192.168.39.239 127.0.0.1 ::1]
I1108 09:10:34.680783 372364 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1108 09:10:34.680952 372364 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-160421 localhost] and IPs [192.168.39.239 127.0.0.1 ::1]
I1108 09:10:34.681057 372364 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1108 09:10:34.681165 372364 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1108 09:10:34.681240 372364 kubeadm.go:319] [certs] Generating "sa" key and public key
I1108 09:10:34.681350 372364 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1108 09:10:34.681409 372364 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1108 09:10:34.681499 372364 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1108 09:10:34.681577 372364 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1108 09:10:34.681658 372364 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1108 09:10:34.681730 372364 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1108 09:10:34.681854 372364 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1108 09:10:34.681960 372364 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1108 09:10:34.683396 372364 out.go:252] - Booting up control plane ...
I1108 09:10:34.683507 372364 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1108 09:10:34.683582 372364 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1108 09:10:34.683654 372364 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1108 09:10:34.683775 372364 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1108 09:10:34.683862 372364 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1108 09:10:34.683996 372364 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1108 09:10:34.684100 372364 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1108 09:10:34.684137 372364 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1108 09:10:34.684304 372364 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1108 09:10:34.684418 372364 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1108 09:10:34.684517 372364 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001584291s
I1108 09:10:34.684642 372364 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1108 09:10:34.684764 372364 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.239:8443/livez
I1108 09:10:34.684847 372364 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1108 09:10:34.684961 372364 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1108 09:10:34.685049 372364 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.486619297s
I1108 09:10:34.685147 372364 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.394520582s
I1108 09:10:34.685238 372364 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502558609s
I1108 09:10:34.685392 372364 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1108 09:10:34.685540 372364 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1108 09:10:34.685596 372364 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1108 09:10:34.685782 372364 kubeadm.go:319] [mark-control-plane] Marking the node addons-160421 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1108 09:10:34.685872 372364 kubeadm.go:319] [bootstrap-token] Using token: d11nk0.503tvc1e5ywngrhz
I1108 09:10:34.687315 372364 out.go:252] - Configuring RBAC rules ...
I1108 09:10:34.687424 372364 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1108 09:10:34.687530 372364 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1108 09:10:34.687732 372364 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1108 09:10:34.687871 372364 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1108 09:10:34.688008 372364 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1108 09:10:34.688139 372364 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1108 09:10:34.688261 372364 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1108 09:10:34.688307 372364 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1108 09:10:34.688344 372364 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1108 09:10:34.688350 372364 kubeadm.go:319]
I1108 09:10:34.688414 372364 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1108 09:10:34.688431 372364 kubeadm.go:319]
I1108 09:10:34.688531 372364 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1108 09:10:34.688542 372364 kubeadm.go:319]
I1108 09:10:34.688563 372364 kubeadm.go:319] mkdir -p $HOME/.kube
I1108 09:10:34.688612 372364 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1108 09:10:34.688671 372364 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1108 09:10:34.688681 372364 kubeadm.go:319]
I1108 09:10:34.688755 372364 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1108 09:10:34.688761 372364 kubeadm.go:319]
I1108 09:10:34.688805 372364 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1108 09:10:34.688827 372364 kubeadm.go:319]
I1108 09:10:34.688878 372364 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1108 09:10:34.688971 372364 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1108 09:10:34.689041 372364 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1108 09:10:34.689054 372364 kubeadm.go:319]
I1108 09:10:34.689179 372364 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1108 09:10:34.689265 372364 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1108 09:10:34.689272 372364 kubeadm.go:319]
I1108 09:10:34.689335 372364 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token d11nk0.503tvc1e5ywngrhz \
I1108 09:10:34.689440 372364 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:2668eced26baf6bbb43793752f787fda387e474a49d43e8872266a22d81391ab \
I1108 09:10:34.689460 372364 kubeadm.go:319] --control-plane
I1108 09:10:34.689463 372364 kubeadm.go:319]
I1108 09:10:34.689549 372364 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1108 09:10:34.689563 372364 kubeadm.go:319]
I1108 09:10:34.689633 372364 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token d11nk0.503tvc1e5ywngrhz \
I1108 09:10:34.689758 372364 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:2668eced26baf6bbb43793752f787fda387e474a49d43e8872266a22d81391ab
I1108 09:10:34.689782 372364 cni.go:84] Creating CNI manager for ""
I1108 09:10:34.689792 372364 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1108 09:10:34.691035 372364 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1108 09:10:34.692220 372364 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1108 09:10:34.707438 372364 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1108 09:10:34.730657 372364 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1108 09:10:34.730769 372364 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-160421 minikube.k8s.io/updated_at=2025_11_08T09_10_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=addons-160421 minikube.k8s.io/primary=true
I1108 09:10:34.730795 372364 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1108 09:10:34.852264 372364 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1108 09:10:34.905498 372364 ops.go:34] apiserver oom_adj: -16
I1108 09:10:35.352389 372364 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1108 09:10:35.852645 372364 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1108 09:10:36.352606 372364 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1108 09:10:36.853148 372364 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1108 09:10:37.352634 372364 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1108 09:10:37.852471 372364 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1108 09:10:38.353059 372364 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1108 09:10:38.853260 372364 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1108 09:10:38.965697 372364 kubeadm.go:1114] duration metric: took 4.235019615s to wait for elevateKubeSystemPrivileges
I1108 09:10:38.965753 372364 kubeadm.go:403] duration metric: took 16.023168542s to StartCluster
I1108 09:10:38.965785 372364 settings.go:142] acquiring lock: {Name:mk120775f8ee9680fac283e25751bf6fd976d6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1108 09:10:38.965980 372364 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21865-367706/kubeconfig
I1108 09:10:38.966586 372364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-367706/kubeconfig: {Name:mk230fbd697e45eec8dc47813f8c0b604d5fb55d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1108 09:10:38.966937 372364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1108 09:10:38.966933 372364 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1108 09:10:38.966970 372364 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1108 09:10:38.967090 372364 addons.go:70] Setting yakd=true in profile "addons-160421"
I1108 09:10:38.967107 372364 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-160421"
I1108 09:10:38.967125 372364 addons.go:239] Setting addon yakd=true in "addons-160421"
I1108 09:10:38.967123 372364 addons.go:70] Setting metrics-server=true in profile "addons-160421"
I1108 09:10:38.967139 372364 addons.go:70] Setting gcp-auth=true in profile "addons-160421"
I1108 09:10:38.967146 372364 addons.go:239] Setting addon metrics-server=true in "addons-160421"
I1108 09:10:38.967146 372364 config.go:182] Loaded profile config "addons-160421": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:10:38.967155 372364 host.go:66] Checking if "addons-160421" exists ...
I1108 09:10:38.967161 372364 addons.go:70] Setting storage-provisioner=true in profile "addons-160421"
I1108 09:10:38.967181 372364 mustload.go:66] Loading cluster: addons-160421
I1108 09:10:38.967189 372364 addons.go:239] Setting addon storage-provisioner=true in "addons-160421"
I1108 09:10:38.967128 372364 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-160421"
I1108 09:10:38.967193 372364 addons.go:70] Setting ingress-dns=true in profile "addons-160421"
I1108 09:10:38.967204 372364 addons.go:239] Setting addon ingress-dns=true in "addons-160421"
I1108 09:10:38.967212 372364 host.go:66] Checking if "addons-160421" exists ...
I1108 09:10:38.967226 372364 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-160421"
I1108 09:10:38.967233 372364 host.go:66] Checking if "addons-160421" exists ...
I1108 09:10:38.967240 372364 host.go:66] Checking if "addons-160421" exists ...
I1108 09:10:38.967242 372364 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-160421"
I1108 09:10:38.967360 372364 config.go:182] Loaded profile config "addons-160421": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:10:38.967912 372364 addons.go:70] Setting volumesnapshots=true in profile "addons-160421"
I1108 09:10:38.967958 372364 addons.go:239] Setting addon volumesnapshots=true in "addons-160421"
I1108 09:10:38.967191 372364 addons.go:70] Setting volcano=true in profile "addons-160421"
I1108 09:10:38.967983 372364 addons.go:239] Setting addon volcano=true in "addons-160421"
I1108 09:10:38.968019 372364 host.go:66] Checking if "addons-160421" exists ...
I1108 09:10:38.968097 372364 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-160421"
I1108 09:10:38.968134 372364 addons.go:70] Setting ingress=true in profile "addons-160421"
I1108 09:10:38.968145 372364 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-160421"
I1108 09:10:38.968155 372364 addons.go:239] Setting addon ingress=true in "addons-160421"
I1108 09:10:38.968172 372364 host.go:66] Checking if "addons-160421" exists ...
I1108 09:10:38.968191 372364 host.go:66] Checking if "addons-160421" exists ...
I1108 09:10:38.967189 372364 host.go:66] Checking if "addons-160421" exists ...
I1108 09:10:38.968593 372364 host.go:66] Checking if "addons-160421" exists ...
I1108 09:10:38.969016 372364 addons.go:70] Setting default-storageclass=true in profile "addons-160421"
I1108 09:10:38.969042 372364 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-160421"
I1108 09:10:38.967093 372364 addons.go:70] Setting cloud-spanner=true in profile "addons-160421"
I1108 09:10:38.969070 372364 addons.go:239] Setting addon cloud-spanner=true in "addons-160421"
I1108 09:10:38.969106 372364 host.go:66] Checking if "addons-160421" exists ...
I1108 09:10:38.969436 372364 out.go:179] * Verifying Kubernetes components...
I1108 09:10:38.969578 372364 addons.go:70] Setting inspektor-gadget=true in profile "addons-160421"
I1108 09:10:38.969602 372364 addons.go:239] Setting addon inspektor-gadget=true in "addons-160421"
I1108 09:10:38.969685 372364 host.go:66] Checking if "addons-160421" exists ...
I1108 09:10:38.969815 372364 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-160421"
I1108 09:10:38.969815 372364 addons.go:70] Setting registry-creds=true in profile "addons-160421"
I1108 09:10:38.969837 372364 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-160421"
I1108 09:10:38.969841 372364 addons.go:239] Setting addon registry-creds=true in "addons-160421"
I1108 09:10:38.969861 372364 host.go:66] Checking if "addons-160421" exists ...
I1108 09:10:38.969864 372364 host.go:66] Checking if "addons-160421" exists ...
I1108 09:10:38.969786 372364 addons.go:70] Setting registry=true in profile "addons-160421"
I1108 09:10:38.969978 372364 addons.go:239] Setting addon registry=true in "addons-160421"
I1108 09:10:38.970004 372364 host.go:66] Checking if "addons-160421" exists ...
I1108 09:10:38.971192 372364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1108 09:10:38.973188 372364 host.go:66] Checking if "addons-160421" exists ...
W1108 09:10:38.975137 372364 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1108 09:10:38.975392 372364 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1108 09:10:38.975457 372364 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1108 09:10:38.975493 372364 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1108 09:10:38.975392 372364 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1108 09:10:38.975563 372364 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-160421"
I1108 09:10:38.976077 372364 host.go:66] Checking if "addons-160421" exists ...
I1108 09:10:38.977138 372364 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1108 09:10:38.977150 372364 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1108 09:10:38.977164 372364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1108 09:10:38.977170 372364 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1108 09:10:38.977190 372364 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1108 09:10:38.977208 372364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1108 09:10:38.977143 372364 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1108 09:10:38.977309 372364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1108 09:10:38.978134 372364 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1108 09:10:38.978143 372364 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1108 09:10:38.978145 372364 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1108 09:10:38.978175 372364 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1108 09:10:38.978337 372364 addons.go:239] Setting addon default-storageclass=true in "addons-160421"
I1108 09:10:38.979440 372364 host.go:66] Checking if "addons-160421" exists ...
I1108 09:10:38.979674 372364 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1108 09:10:38.979682 372364 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1108 09:10:38.979696 372364 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1108 09:10:38.979063 372364 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
I1108 09:10:38.979045 372364 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
I1108 09:10:38.980825 372364 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1108 09:10:38.980842 372364 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1108 09:10:38.980854 372364 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1108 09:10:38.980898 372364 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1108 09:10:38.980870 372364 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1108 09:10:38.981189 372364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1108 09:10:38.981773 372364 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
I1108 09:10:38.982534 372364 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1108 09:10:38.982537 372364 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1108 09:10:38.982565 372364 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1108 09:10:38.982931 372364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1108 09:10:38.982615 372364 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1108 09:10:38.982957 372364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1108 09:10:38.982664 372364 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1108 09:10:38.983028 372364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1108 09:10:38.984308 372364 out.go:179] - Using image docker.io/registry:3.0.0
I1108 09:10:38.984669 372364 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1108 09:10:38.984793 372364 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1108 09:10:38.985141 372364 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1108 09:10:38.985141 372364 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1108 09:10:38.985899 372364 out.go:179] - Using image docker.io/busybox:stable
I1108 09:10:38.985914 372364 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1108 09:10:38.986311 372364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1108 09:10:38.986325 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.986423 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.986753 372364 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1108 09:10:38.987067 372364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1108 09:10:38.987525 372364 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1108 09:10:38.987550 372364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1108 09:10:38.987874 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.987949 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.988361 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:38.988397 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.988407 372364 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1108 09:10:38.988479 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:38.988509 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.989312 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:38.989319 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:38.990234 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:38.990284 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.990317 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:38.990346 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.990636 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.990804 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:38.991118 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:38.991162 372364 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1108 09:10:38.992025 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.992591 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:38.992625 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.993131 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.993382 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:38.993422 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:38.993454 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.993826 372364 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1108 09:10:38.994279 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:38.994607 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:38.994642 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.995065 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.995319 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:38.995451 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.995897 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:38.995909 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.995934 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.996074 372364 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1108 09:10:38.996173 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:38.996205 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.996321 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:38.996461 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.996729 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.996805 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:38.996835 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:38.996902 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.997600 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.997602 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:38.997623 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:38.997651 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.997670 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.998235 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:38.998288 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.998311 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:38.998572 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:38.998763 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:38.998826 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:38.998849 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.998864 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:38.999030 372364 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1108 09:10:38.999068 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:38.999243 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:39.000564 372364 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1108 09:10:39.000586 372364 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1108 09:10:39.003379 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:39.003798 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:39.003852 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:39.004036 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
W1108 09:10:39.248025 372364 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:46790->192.168.39.239:22: read: connection reset by peer
I1108 09:10:39.248066 372364 retry.go:31] will retry after 350.106859ms: ssh: handshake failed: read tcp 192.168.39.1:46790->192.168.39.239:22: read: connection reset by peer
I1108 09:10:39.412669 372364 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1108 09:10:39.412701 372364 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1108 09:10:39.457702 372364 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1108 09:10:39.457735 372364 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1108 09:10:39.525987 372364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1108 09:10:39.581876 372364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1108 09:10:39.608969 372364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1108 09:10:39.614477 372364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1108 09:10:39.645707 372364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1108 09:10:39.648467 372364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1108 09:10:39.648468 372364 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1108 09:10:39.659355 372364 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1108 09:10:39.659389 372364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1108 09:10:39.659973 372364 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1108 09:10:39.660000 372364 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1108 09:10:39.660860 372364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1108 09:10:39.660863 372364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1108 09:10:39.662288 372364 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1108 09:10:39.662310 372364 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1108 09:10:39.667009 372364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1108 09:10:39.725475 372364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1108 09:10:39.738523 372364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1108 09:10:39.797430 372364 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1108 09:10:39.797477 372364 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1108 09:10:39.889939 372364 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1108 09:10:39.889973 372364 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1108 09:10:39.898626 372364 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1108 09:10:39.898645 372364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1108 09:10:39.940982 372364 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1108 09:10:39.941026 372364 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1108 09:10:40.078062 372364 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1108 09:10:40.078095 372364 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1108 09:10:40.081202 372364 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1108 09:10:40.081227 372364 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1108 09:10:40.152815 372364 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1108 09:10:40.152847 372364 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1108 09:10:40.159202 372364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1108 09:10:40.192079 372364 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1108 09:10:40.192110 372364 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1108 09:10:40.408596 372364 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1108 09:10:40.408619 372364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1108 09:10:40.463379 372364 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1108 09:10:40.463417 372364 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1108 09:10:40.564500 372364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1108 09:10:40.629121 372364 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1108 09:10:40.629163 372364 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1108 09:10:40.706948 372364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1108 09:10:40.870092 372364 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1108 09:10:40.870127 372364 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1108 09:10:41.103164 372364 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1108 09:10:41.103195 372364 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1108 09:10:41.372586 372364 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1108 09:10:41.372609 372364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1108 09:10:41.621943 372364 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1108 09:10:41.621982 372364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1108 09:10:42.097735 372364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1108 09:10:42.202030 372364 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1108 09:10:42.202058 372364 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1108 09:10:42.708201 372364 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1108 09:10:42.708235 372364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1108 09:10:42.973218 372364 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1108 09:10:42.973257 372364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1108 09:10:43.420416 372364 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1108 09:10:43.420466 372364 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1108 09:10:43.783452 372364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1108 09:10:44.631474 372364 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.105436888s)
I1108 09:10:44.631488 372364 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.049579414s)
I1108 09:10:44.631527 372364 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.022532501s)
I1108 09:10:44.631585 372364 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.017085935s)
I1108 09:10:45.643823 372364 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.998062667s)
I1108 09:10:45.643920 372364 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.995410971s)
I1108 09:10:45.643949 372364 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1108 09:10:45.643964 372364 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.995414492s)
I1108 09:10:45.645002 372364 node_ready.go:35] waiting up to 6m0s for node "addons-160421" to be "Ready" ...
I1108 09:10:45.792647 372364 node_ready.go:49] node "addons-160421" is "Ready"
I1108 09:10:45.792677 372364 node_ready.go:38] duration metric: took 147.642415ms for node "addons-160421" to be "Ready" ...
I1108 09:10:45.792692 372364 api_server.go:52] waiting for apiserver process to appear ...
I1108 09:10:45.792748 372364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1108 09:10:46.206089 372364 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-160421" context rescaled to 1 replicas
I1108 09:10:46.438360 372364 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1108 09:10:46.441649 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:46.442149 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:46.442182 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:46.442383 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:46.774605 372364 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1108 09:10:46.894729 372364 addons.go:239] Setting addon gcp-auth=true in "addons-160421"
I1108 09:10:46.894800 372364 host.go:66] Checking if "addons-160421" exists ...
I1108 09:10:46.896719 372364 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1108 09:10:46.899588 372364 main.go:143] libmachine: domain addons-160421 has defined MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:46.900097 372364 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:c0:76", ip: ""} in network mk-addons-160421: {Iface:virbr1 ExpiryTime:2025-11-08 10:10:14 +0000 UTC Type:0 Mac:52:54:00:45:c0:76 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-160421 Clientid:01:52:54:00:45:c0:76}
I1108 09:10:46.900130 372364 main.go:143] libmachine: domain addons-160421 has defined IP address 192.168.39.239 and MAC address 52:54:00:45:c0:76 in network mk-addons-160421
I1108 09:10:46.900318 372364 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21865-367706/.minikube/machines/addons-160421/id_rsa Username:docker}
I1108 09:10:47.276715 372364 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.615772143s)
I1108 09:10:47.276777 372364 addons.go:480] Verifying addon ingress=true in "addons-160421"
I1108 09:10:47.276797 372364 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.615896856s)
I1108 09:10:47.276948 372364 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.551446752s)
I1108 09:10:47.276891 372364 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.609852826s)
I1108 09:10:47.277051 372364 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.538488277s)
I1108 09:10:47.277096 372364 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.11786215s)
I1108 09:10:47.277260 372364 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.712713623s)
I1108 09:10:47.277282 372364 addons.go:480] Verifying addon metrics-server=true in "addons-160421"
I1108 09:10:47.277314 372364 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.57033299s)
I1108 09:10:47.277343 372364 addons.go:480] Verifying addon registry=true in "addons-160421"
I1108 09:10:47.278650 372364 out.go:179] * Verifying ingress addon...
I1108 09:10:47.278655 372364 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-160421 service yakd-dashboard -n yakd-dashboard
I1108 09:10:47.279526 372364 out.go:179] * Verifying registry addon...
I1108 09:10:47.281234 372364 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1108 09:10:47.281720 372364 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1108 09:10:47.288653 372364 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1108 09:10:47.288686 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:10:47.288653 372364 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1108 09:10:47.288725 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:10:47.826282 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:10:47.826389 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:10:48.330056 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:10:48.331558 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:10:48.580174 372364 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.482376386s)
W1108 09:10:48.580268 372364 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1108 09:10:48.580304 372364 retry.go:31] will retry after 196.481502ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1108 09:10:48.614177 372364 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.830646988s)
I1108 09:10:48.614234 372364 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.821458467s)
I1108 09:10:48.614240 372364 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-160421"
I1108 09:10:48.614282 372364 api_server.go:72] duration metric: took 9.647238918s to wait for apiserver process to appear ...
I1108 09:10:48.614296 372364 api_server.go:88] waiting for apiserver healthz status ...
I1108 09:10:48.614321 372364 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
I1108 09:10:48.614324 372364 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.717570537s)
I1108 09:10:48.616090 372364 out.go:179] * Verifying csi-hostpath-driver addon...
I1108 09:10:48.616091 372364 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1108 09:10:48.617412 372364 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1108 09:10:48.618011 372364 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1108 09:10:48.618546 372364 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1108 09:10:48.618564 372364 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1108 09:10:48.659083 372364 api_server.go:279] https://192.168.39.239:8443/healthz returned 200:
ok
I1108 09:10:48.664018 372364 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1108 09:10:48.664046 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:10:48.672694 372364 api_server.go:141] control plane version: v1.34.1
I1108 09:10:48.672731 372364 api_server.go:131] duration metric: took 58.428127ms to wait for apiserver health ...
I1108 09:10:48.672745 372364 system_pods.go:43] waiting for kube-system pods to appear ...
I1108 09:10:48.700337 372364 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1108 09:10:48.700385 372364 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1108 09:10:48.734895 372364 system_pods.go:59] 19 kube-system pods found
I1108 09:10:48.734933 372364 system_pods.go:61] "amd-gpu-device-plugin-sfn75" [7cc0cd00-700b-4aaf-a7c2-43a44c3c4507] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1108 09:10:48.734940 372364 system_pods.go:61] "coredns-66bc5c9577-59bdx" [795b6325-a86a-4023-91cd-f225c7256ed9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1108 09:10:48.734948 372364 system_pods.go:61] "coredns-66bc5c9577-gs4wg" [8eec2bf1-2966-403e-932c-9b9798e0e186] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1108 09:10:48.734954 372364 system_pods.go:61] "csi-hostpath-attacher-0" [fd32b420-8ec1-4e9d-913b-33c360e14624] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1108 09:10:48.734958 372364 system_pods.go:61] "csi-hostpath-resizer-0" [c456b782-c4ad-441a-add3-510c4cd0e41e] Pending
I1108 09:10:48.734963 372364 system_pods.go:61] "csi-hostpathplugin-g7xzl" [3aa5ae60-b423-4f48-8259-7020f5d32ba5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1108 09:10:48.734966 372364 system_pods.go:61] "etcd-addons-160421" [f18b66d0-f682-4145-b3b2-7d49d8540e71] Running
I1108 09:10:48.734971 372364 system_pods.go:61] "kube-apiserver-addons-160421" [0a3a7496-a14c-41fa-af24-e20cc244f89f] Running
I1108 09:10:48.734975 372364 system_pods.go:61] "kube-controller-manager-addons-160421" [33949205-c3b0-49cd-ba16-e4aa705accb0] Running
I1108 09:10:48.734989 372364 system_pods.go:61] "kube-ingress-dns-minikube" [2c3251b7-c4f2-40cb-a567-fb7107e49f34] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1108 09:10:48.734995 372364 system_pods.go:61] "kube-proxy-zrq2p" [62157653-cdee-4388-8995-5967ab1c69d0] Running
I1108 09:10:48.735000 372364 system_pods.go:61] "kube-scheduler-addons-160421" [88be3f41-8bc2-4d28-a58f-903bea3613e8] Running
I1108 09:10:48.735008 372364 system_pods.go:61] "metrics-server-85b7d694d7-sr5hr" [8706fd9e-e7f5-42ad-bd41-0ae9f8689269] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1108 09:10:48.735019 372364 system_pods.go:61] "nvidia-device-plugin-daemonset-j5m5m" [37898bb0-ebc2-4792-b515-72f09a9e601d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1108 09:10:48.735025 372364 system_pods.go:61] "registry-6b586f9694-h2r45" [0e4fcc1e-4eb7-481f-b623-f89eae5d7acc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1108 09:10:48.735032 372364 system_pods.go:61] "registry-creds-764b6fb674-44krb" [0aa34e77-60b7-4ecf-b227-7cea950bde34] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1108 09:10:48.735041 372364 system_pods.go:61] "registry-proxy-s5plg" [ad07ccbe-c182-486a-b264-7ac34c02650c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1108 09:10:48.735046 372364 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zf2lj" [ed962db0-cac8-4ee7-9ade-1e65f4d90ae0] Pending
I1108 09:10:48.735053 372364 system_pods.go:61] "storage-provisioner" [cd1a474f-cce9-415c-8ea9-6c22e5214dd8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1108 09:10:48.735060 372364 system_pods.go:74] duration metric: took 62.307121ms to wait for pod list to return data ...
I1108 09:10:48.735070 372364 default_sa.go:34] waiting for default service account to be created ...
I1108 09:10:48.777349 372364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1108 09:10:48.783737 372364 default_sa.go:45] found service account: "default"
I1108 09:10:48.783762 372364 default_sa.go:55] duration metric: took 48.686173ms for default service account to be created ...
I1108 09:10:48.783771 372364 system_pods.go:116] waiting for k8s-apps to be running ...
I1108 09:10:48.791316 372364 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1108 09:10:48.791340 372364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1108 09:10:48.828703 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:10:48.829609 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:10:48.830434 372364 system_pods.go:86] 20 kube-system pods found
I1108 09:10:48.830474 372364 system_pods.go:89] "amd-gpu-device-plugin-sfn75" [7cc0cd00-700b-4aaf-a7c2-43a44c3c4507] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1108 09:10:48.830499 372364 system_pods.go:89] "coredns-66bc5c9577-59bdx" [795b6325-a86a-4023-91cd-f225c7256ed9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1108 09:10:48.830518 372364 system_pods.go:89] "coredns-66bc5c9577-gs4wg" [8eec2bf1-2966-403e-932c-9b9798e0e186] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1108 09:10:48.830529 372364 system_pods.go:89] "csi-hostpath-attacher-0" [fd32b420-8ec1-4e9d-913b-33c360e14624] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1108 09:10:48.830541 372364 system_pods.go:89] "csi-hostpath-resizer-0" [c456b782-c4ad-441a-add3-510c4cd0e41e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1108 09:10:48.830553 372364 system_pods.go:89] "csi-hostpathplugin-g7xzl" [3aa5ae60-b423-4f48-8259-7020f5d32ba5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1108 09:10:48.830565 372364 system_pods.go:89] "etcd-addons-160421" [f18b66d0-f682-4145-b3b2-7d49d8540e71] Running
I1108 09:10:48.830571 372364 system_pods.go:89] "kube-apiserver-addons-160421" [0a3a7496-a14c-41fa-af24-e20cc244f89f] Running
I1108 09:10:48.830589 372364 system_pods.go:89] "kube-controller-manager-addons-160421" [33949205-c3b0-49cd-ba16-e4aa705accb0] Running
I1108 09:10:48.830602 372364 system_pods.go:89] "kube-ingress-dns-minikube" [2c3251b7-c4f2-40cb-a567-fb7107e49f34] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1108 09:10:48.830607 372364 system_pods.go:89] "kube-proxy-zrq2p" [62157653-cdee-4388-8995-5967ab1c69d0] Running
I1108 09:10:48.830613 372364 system_pods.go:89] "kube-scheduler-addons-160421" [88be3f41-8bc2-4d28-a58f-903bea3613e8] Running
I1108 09:10:48.830634 372364 system_pods.go:89] "metrics-server-85b7d694d7-sr5hr" [8706fd9e-e7f5-42ad-bd41-0ae9f8689269] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1108 09:10:48.830643 372364 system_pods.go:89] "nvidia-device-plugin-daemonset-j5m5m" [37898bb0-ebc2-4792-b515-72f09a9e601d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1108 09:10:48.830652 372364 system_pods.go:89] "registry-6b586f9694-h2r45" [0e4fcc1e-4eb7-481f-b623-f89eae5d7acc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1108 09:10:48.830665 372364 system_pods.go:89] "registry-creds-764b6fb674-44krb" [0aa34e77-60b7-4ecf-b227-7cea950bde34] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1108 09:10:48.830674 372364 system_pods.go:89] "registry-proxy-s5plg" [ad07ccbe-c182-486a-b264-7ac34c02650c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1108 09:10:48.830680 372364 system_pods.go:89] "snapshot-controller-7d9fbc56b8-hpd6n" [16445ae3-d54e-473e-bab6-5185c1fc3bfd] Pending
I1108 09:10:48.830686 372364 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zf2lj" [ed962db0-cac8-4ee7-9ade-1e65f4d90ae0] Pending
I1108 09:10:48.830693 372364 system_pods.go:89] "storage-provisioner" [cd1a474f-cce9-415c-8ea9-6c22e5214dd8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1108 09:10:48.830706 372364 system_pods.go:126] duration metric: took 46.928644ms to wait for k8s-apps to be running ...
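The `duration metric` lines in this log are plain wall-clock deltas around each wait step. A minimal Python sketch of the same pattern (the `timed` helper and its label are illustrative, not minikube's actual code):

```python
import time

def timed(label, fn, *args):
    """Run fn(*args), printing an elapsed-time line in the log's style."""
    start = time.monotonic()
    result = fn(*args)
    elapsed = time.monotonic() - start
    print(f"duration metric: took {elapsed:.6f}s to {label}")
    return result

# Example: time an arbitrary operation.
value = timed("wait for pod list to return data", sum, [1, 2, 3])
```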
I1108 09:10:48.830719 372364 system_svc.go:44] waiting for kubelet service to be running ....
I1108 09:10:48.830784 372364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1108 09:10:48.970703 372364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1108 09:10:49.126821 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:10:49.292116 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:10:49.292843 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:10:49.628676 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:10:49.790678 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:10:49.792939 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:10:50.126513 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:10:50.290654 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:10:50.291634 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:10:50.629701 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:10:50.789770 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:10:50.797107 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:10:51.029189 372364 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.251769187s)
I1108 09:10:51.029232 372364 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.198417171s)
I1108 09:10:51.029275 372364 system_svc.go:56] duration metric: took 2.198550618s WaitForService to wait for kubelet
I1108 09:10:51.029290 372364 kubeadm.go:587] duration metric: took 12.062242822s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1108 09:10:51.029319 372364 node_conditions.go:102] verifying NodePressure condition ...
I1108 09:10:51.029336 372364 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.058595343s)
I1108 09:10:51.030385 372364 addons.go:480] Verifying addon gcp-auth=true in "addons-160421"
I1108 09:10:51.032199 372364 out.go:179] * Verifying gcp-auth addon...
I1108 09:10:51.034337 372364 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1108 09:10:51.038734 372364 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1108 09:10:51.038783 372364 node_conditions.go:123] node cpu capacity is 2
I1108 09:10:51.038804 372364 node_conditions.go:105] duration metric: took 9.477312ms to run NodePressure ...
I1108 09:10:51.038820 372364 start.go:242] waiting for startup goroutines ...
I1108 09:10:51.041568 372364 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1108 09:10:51.041590 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:10:51.125371 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
[... 181 further kapi.go:96 poll lines omitted: pods "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=registry", and "app.kubernetes.io/name=ingress-nginx" all remained Pending: [<nil>], polled every ~250-500ms from 09:10:51.289 through 09:11:13.785 ...]
I1108 09:11:13.787701 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:14.040269 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:14.137035 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:14.291845 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:14.293011 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:14.538296 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:14.621517 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:14.786663 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:14.788213 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:15.039456 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:15.122011 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:15.286180 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:15.286436 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:15.540159 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:15.641170 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:15.790545 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:15.792005 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:16.039926 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:16.122955 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:16.285933 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:16.287035 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:16.539132 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:16.621647 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:16.789330 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:16.791041 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:17.084871 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:17.122579 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:17.287185 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:17.287723 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:17.537558 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:17.622911 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:17.785756 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:17.786645 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:18.038830 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:18.122587 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:18.286881 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:18.287164 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:18.538603 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:18.622865 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:18.785725 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:18.786118 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:19.038587 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:19.139957 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:19.291206 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:19.291426 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:19.538684 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:19.623627 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:19.785312 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:19.785677 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:20.039425 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:20.122286 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:20.284078 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:20.285460 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:20.538174 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:20.621430 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:20.786321 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:20.786425 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:21.038419 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:21.122137 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:21.285289 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:21.285383 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:21.537657 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:21.622424 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:21.784910 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:21.786221 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:22.037665 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:22.122589 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:22.284470 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:22.285827 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:22.538161 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:22.621321 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:22.786271 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:22.786397 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:23.037652 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:23.121741 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:23.284796 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:23.286361 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:23.539242 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:23.622686 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:23.787868 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:23.787924 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:24.037335 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:24.125723 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:24.293128 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:24.293518 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:24.538354 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:24.624771 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:24.786093 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:24.786325 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:25.037469 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:25.122576 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:25.285930 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:25.286404 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:25.538454 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:25.622540 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:25.800981 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1108 09:11:25.804709 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:26.039883 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:26.123755 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:26.287103 372364 kapi.go:107] duration metric: took 39.00537731s to wait for kubernetes.io/minikube-addons=registry ...
I1108 09:11:26.287467 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:26.539173 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:26.641862 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:26.785794 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:27.038374 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:27.121987 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:27.285346 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:27.538763 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:27.622475 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:27.784637 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:28.037650 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:28.122711 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:28.284959 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:28.539401 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:28.624096 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:28.798145 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:29.045212 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:29.125935 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:29.285659 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:29.539064 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:29.626653 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:29.789194 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:30.041723 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:30.127187 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:30.290994 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:30.541232 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:30.630905 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:30.785355 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:31.039186 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:31.122724 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:31.284875 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:31.538422 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:31.622807 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:31.785201 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:32.051367 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:32.149134 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:32.286031 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:32.538326 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:32.622646 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:32.785139 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:33.038497 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:33.123221 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:33.287581 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:33.538902 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:33.622502 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:33.786617 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:34.040143 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:34.123176 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:34.285818 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:34.834100 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:34.834460 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:34.834460 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:35.039460 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:35.123518 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:35.285640 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:35.539509 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:35.623713 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:35.785978 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:36.038686 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:36.122382 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:36.284327 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:36.539318 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:36.622314 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:36.792969 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:37.038411 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:37.138671 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:37.292906 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:37.539426 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:37.623360 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:37.786226 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:38.038526 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:38.123111 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:38.287485 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:38.538334 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:38.624370 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:38.784411 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:39.041995 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:39.126849 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:39.286504 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:39.604158 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:39.627278 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:39.787477 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:40.038186 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:40.121804 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:40.285006 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:40.538851 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:40.622613 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:40.784880 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:41.038799 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:41.122093 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:41.285862 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:41.538168 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:41.622951 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:41.786548 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:42.041419 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:42.121888 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:42.288364 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:42.539052 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:42.623845 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:42.860064 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:43.038649 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:43.138833 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:43.286452 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:43.543050 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:43.623161 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:43.786313 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:44.037545 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:44.122269 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:44.287126 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:44.544801 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:44.623823 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:44.785622 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:45.046109 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:45.137328 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:45.288020 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:45.542470 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:45.629255 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:45.786308 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:46.039168 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:46.140753 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:46.287859 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:46.546686 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:46.648429 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:46.786400 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:47.039439 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:47.124319 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:47.287023 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:47.539041 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:47.640134 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:47.786565 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:48.037659 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:48.122875 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:48.287231 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:48.539035 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:48.624965 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:48.789752 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:49.040454 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:49.123375 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:49.288500 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:49.538227 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:49.625039 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:49.785339 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:50.040738 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:50.141512 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:50.294920 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:50.539138 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:50.622988 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:50.785598 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:51.044918 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:51.123637 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:51.285706 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:51.541962 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:51.622238 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:51.786709 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:52.038869 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:52.125049 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:52.286957 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:52.539217 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:52.623608 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:52.787118 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:53.039894 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:53.126328 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:53.284148 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:53.538263 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:53.623165 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:53.787949 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:54.039857 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:54.126035 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:54.286841 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:54.543672 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:54.627445 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:54.784887 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:55.042874 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:55.127806 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:55.290747 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:55.545076 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:55.625521 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:55.784927 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:56.041318 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:56.122579 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:56.286240 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:56.540415 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:56.623040 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:56.789148 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:57.097576 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:57.126398 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:57.286904 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:57.539461 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:57.621528 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:57.785302 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:58.041359 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:58.124371 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:58.284566 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:58.538700 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:58.622136 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:58.785915 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:59.039139 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:59.123054 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:59.288788 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:11:59.541727 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:11:59.642622 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:11:59.787020 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:12:00.041435 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:12:00.122192 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:12:00.287463 372364 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1108 09:12:00.542377 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:12:00.622134 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:12:00.785920 372364 kapi.go:107] duration metric: took 1m13.504684172s to wait for app.kubernetes.io/name=ingress-nginx ...
I1108 09:12:01.090725 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:12:01.204154 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1108 09:12:01.539290 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:12:01.639878 372364 kapi.go:107] duration metric: took 1m13.021861867s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1108 09:12:02.039706 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:12:02.538297 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:12:03.041094 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:12:03.542035 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:12:04.042881 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:12:04.539649 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:12:05.042069 372364 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1108 09:12:05.538165 372364 kapi.go:107] duration metric: took 1m14.503827738s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1108 09:12:05.540033 372364 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-160421 cluster.
I1108 09:12:05.541275 372364 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1108 09:12:05.542679 372364 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I1108 09:12:05.544214 372364 out.go:179] * Enabled addons: storage-provisioner, amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, storage-provisioner-rancher, cloud-spanner, ingress-dns, inspektor-gadget, registry-creds, metrics-server, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
I1108 09:12:05.545498 372364 addons.go:515] duration metric: took 1m26.578533359s for enable addons: enabled=[storage-provisioner amd-gpu-device-plugin nvidia-device-plugin default-storageclass storage-provisioner-rancher cloud-spanner ingress-dns inspektor-gadget registry-creds metrics-server yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
I1108 09:12:05.545557 372364 start.go:247] waiting for cluster config update ...
I1108 09:12:05.545590 372364 start.go:256] writing updated cluster config ...
I1108 09:12:05.545903 372364 ssh_runner.go:195] Run: rm -f paused
I1108 09:12:05.552366 372364 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1108 09:12:05.557080 372364 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-59bdx" in "kube-system" namespace to be "Ready" or be gone ...
I1108 09:12:05.562175 372364 pod_ready.go:94] pod "coredns-66bc5c9577-59bdx" is "Ready"
I1108 09:12:05.562199 372364 pod_ready.go:86] duration metric: took 5.094056ms for pod "coredns-66bc5c9577-59bdx" in "kube-system" namespace to be "Ready" or be gone ...
I1108 09:12:05.564651 372364 pod_ready.go:83] waiting for pod "etcd-addons-160421" in "kube-system" namespace to be "Ready" or be gone ...
I1108 09:12:05.569279 372364 pod_ready.go:94] pod "etcd-addons-160421" is "Ready"
I1108 09:12:05.569298 372364 pod_ready.go:86] duration metric: took 4.627967ms for pod "etcd-addons-160421" in "kube-system" namespace to be "Ready" or be gone ...
I1108 09:12:05.571242 372364 pod_ready.go:83] waiting for pod "kube-apiserver-addons-160421" in "kube-system" namespace to be "Ready" or be gone ...
I1108 09:12:05.576055 372364 pod_ready.go:94] pod "kube-apiserver-addons-160421" is "Ready"
I1108 09:12:05.576074 372364 pod_ready.go:86] duration metric: took 4.801271ms for pod "kube-apiserver-addons-160421" in "kube-system" namespace to be "Ready" or be gone ...
I1108 09:12:05.578407 372364 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-160421" in "kube-system" namespace to be "Ready" or be gone ...
I1108 09:12:06.026476 372364 pod_ready.go:94] pod "kube-controller-manager-addons-160421" is "Ready"
I1108 09:12:06.026504 372364 pod_ready.go:86] duration metric: took 448.076021ms for pod "kube-controller-manager-addons-160421" in "kube-system" namespace to be "Ready" or be gone ...
I1108 09:12:06.157664 372364 pod_ready.go:83] waiting for pod "kube-proxy-zrq2p" in "kube-system" namespace to be "Ready" or be gone ...
I1108 09:12:06.558931 372364 pod_ready.go:94] pod "kube-proxy-zrq2p" is "Ready"
I1108 09:12:06.558978 372364 pod_ready.go:86] duration metric: took 401.28188ms for pod "kube-proxy-zrq2p" in "kube-system" namespace to be "Ready" or be gone ...
I1108 09:12:06.756761 372364 pod_ready.go:83] waiting for pod "kube-scheduler-addons-160421" in "kube-system" namespace to be "Ready" or be gone ...
I1108 09:12:07.156944 372364 pod_ready.go:94] pod "kube-scheduler-addons-160421" is "Ready"
I1108 09:12:07.156981 372364 pod_ready.go:86] duration metric: took 400.18603ms for pod "kube-scheduler-addons-160421" in "kube-system" namespace to be "Ready" or be gone ...
I1108 09:12:07.156997 372364 pod_ready.go:40] duration metric: took 1.604591547s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1108 09:12:07.204330 372364 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
I1108 09:12:07.206097 372364 out.go:179] * Done! kubectl is now configured to use "addons-160421" cluster and "default" namespace by default
==> CRI-O <==
Nov 08 09:15:16 addons-160421 crio[818]: time="2025-11-08 09:15:16.968786345Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b5d915f-05a6-46f2-9caf-d05165e504b3 name=/runtime.v1.RuntimeService/ListContainers
Nov 08 09:15:16 addons-160421 crio[818]: time="2025-11-08 09:15:16.969048326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b5d915f-05a6-46f2-9caf-d05165e504b3 name=/runtime.v1.RuntimeService/ListContainers
Nov 08 09:15:16 addons-160421 crio[818]: time="2025-11-08 09:15:16.969651884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:35455cc687664ad7ea2a7a475b36c5357e1cc6574050127550a51e9f0bb9e38d,PodSandboxId:54fcc8a7091b208908f388842ff76c30d2bf0869268e2d55f845ed694a2ace27,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1762593175151318668,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 680dd0bb-32c0-4828-b24e-4ab7a48348f6,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15bdc36ccdefb0e73550bb8af6fe116f7f965f148cbd6b5c4c404330c0a8f875,PodSandboxId:bc7ad81818dcfc02292a1713f2226d541642d5d08281345b3ed15f9af2bf5881,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1762593131659875787,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 754ffa90-f171-4a9f-b224-f46c5410a1f2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240e71c7b9dd68f2fa91752289570623552ef6a1b050f0afdb70e562ec772d3c,PodSandboxId:9a0bccd23b65e30cd2665077accdd311247d767879b7a26275e5b91d14fe054f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1762593119490338931,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-tcjs2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a3e6e3ef-5205-4e9f-b0ae-aad7d2cbfb20,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:97e02ae9d1219f3f2ebe856c04843dc1732788cabd6bf99a42392ab8e1233c13,PodSandboxId:fd1fe8aa67b2bb800da00566272856d1054a3162e49cb865a3e8524954fd0838,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1762593096894193407,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qg7b7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c450565b-134c-4a3f-a3c0-6a80cf4a6103,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae9aa2310e3e5ebbcc4fb0e4aaa568022dec588e5efb514425464439cbb2dd66,PodSandboxId:e67e36c55384218475c9eeff09f93d0ff93538913d5b47e8fcdea96edd5fa7dd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1762593086567431950,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sg99v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b92770af-a526-4538-8bdf-c1e90d823cac,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8826d0668013cb7d849b4c99a3d3e26594264c0f758ef1e374a8351b130acf13,PodSandboxId:1bfd20382cb62b42ad51635001e5d92a2db256174f79d48f302842bc49c765a8,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1762593071582250017,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3251b7-c4f2-40cb-a567-fb7107e49f34,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af484047be20c45b18834a3656d42d8c57dca193c226458343d22a63e1dedb5a,PodSandboxId:58177cfbfd9e20bc3c743b865cb8d7a592223ce638b6001d427763c16f55c385,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1762593048290795967,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sfn75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc0cd00-700b-4aaf-a7c2-43a44c3c4507,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ccd253088f2367b00cf781bc8d42402a6ff98e4e500f3353333877a9d0ae02,PodSandboxId:2dbfd160c0e24fb123d03530aba3446c97b33483ff4459cea03853f7a2e835f2,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762593048066596564,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1a474f-cce9-415c-8ea9-6c22e5214dd8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e18d944a5ee49170d7f220c04f42beb90e846fbf6c16142ca326879d9026c2,PodSandboxId:c97f9b14dae37932a1b80c8bfdb8e01fb2b757f3c4ca378c02229f288232636d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762593042281553839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-59bdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795b6325-a86a-4023-91cd-f225c7256ed9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3080a601614a744aa67eeb580952b26b749b6e2e2e5a9c550b0465ca946e079,PodSandboxId:6d8cc3d14be2849427cf2320be72359e255ee98a2000b2948e0fb1d49c437a0f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762593042037288276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zrq2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62157653-cdee-4388-8995-5967ab1c69d0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7791d1f47c8c2cb28321bdb60d72aac6d79180814dc27a52921bd401d0602d6,PodSandboxId:4ec5d82ffd8dca7fddee38d99dfd43366b7a082a07c851bc597a36784e849eda,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762593028332835939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-160421,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4f44f332de188d5d4f2cf677863b2e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9113c77e3c48bcdf293a4699e15f3abab44d71090df0fb7370f5cd944f6f0168,PodSandboxId:9c34212c0adbfd991bcbf793dd3a8587209d9bb9f5a2ead86402b1ea8b611705,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762593028325798072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-160421,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58613835586725639763354eadcdb1c3,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-por
t\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c76e545ddea653c92937b79a050e57f2b272d131222053054a472bdced2470e,PodSandboxId:76c9467a442679908a51769d213f3b2229126932fb6c65a952f914d0e3e9a381,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762593028316009312,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-160421,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e9d676a94ebdd801fdbffd7a
9f55f8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d33f409246215d46cbbffedf52462c032636541138c757cfd54d682688b48f,PodSandboxId:19fe64d121b1556de1a6e84ebed0de271fded205907481f1f7e8c55b9d3e5f4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762593028304132925,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedule
r-addons-160421,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840db15779a87bc565d6e18b68b6e6ed,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b5d915f-05a6-46f2-9caf-d05165e504b3 name=/runtime.v1.RuntimeService/ListContainers
Nov 08 09:15:16 addons-160421 crio[818]: time="2025-11-08 09:15:16.999768025Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.017296222Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1724ea77-6b24-4cda-8b23-5ce94201b0a8 name=/runtime.v1.RuntimeService/Version
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.017407781Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1724ea77-6b24-4cda-8b23-5ce94201b0a8 name=/runtime.v1.RuntimeService/Version
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.019106438Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a64b7c2-de09-4da0-802b-15faab67c56e name=/runtime.v1.ImageService/ImageFsInfo
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.020305961Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762593317020281769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a64b7c2-de09-4da0-802b-15faab67c56e name=/runtime.v1.ImageService/ImageFsInfo
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.020987607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c6bf692b-0eb2-4535-9680-43502031b46f name=/runtime.v1.RuntimeService/ListContainers
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.021056079Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c6bf692b-0eb2-4535-9680-43502031b46f name=/runtime.v1.RuntimeService/ListContainers
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.021367621Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:35455cc687664ad7ea2a7a475b36c5357e1cc6574050127550a51e9f0bb9e38d,PodSandboxId:54fcc8a7091b208908f388842ff76c30d2bf0869268e2d55f845ed694a2ace27,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1762593175151318668,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 680dd0bb-32c0-4828-b24e-4ab7a48348f6,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15bdc36ccdefb0e73550bb8af6fe116f7f965f148cbd6b5c4c404330c0a8f875,PodSandboxId:bc7ad81818dcfc02292a1713f2226d541642d5d08281345b3ed15f9af2bf5881,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1762593131659875787,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 754ffa90-f171-4a9f-b224-f46c5410a1f2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240e71c7b9dd68f2fa91752289570623552ef6a1b050f0afdb70e562ec772d3c,PodSandboxId:9a0bccd23b65e30cd2665077accdd311247d767879b7a26275e5b91d14fe054f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1762593119490338931,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-tcjs2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a3e6e3ef-5205-4e9f-b0ae-aad7d2cbfb20,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:97e02ae9d1219f3f2ebe856c04843dc1732788cabd6bf99a42392ab8e1233c13,PodSandboxId:fd1fe8aa67b2bb800da00566272856d1054a3162e49cb865a3e8524954fd0838,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1762593096894193407,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qg7b7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c450565b-134c-4a3f-a3c0-6a80cf4a6103,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae9aa2310e3e5ebbcc4fb0e4aaa568022dec588e5efb514425464439cbb2dd66,PodSandboxId:e67e36c55384218475c9eeff09f93d0ff93538913d5b47e8fcdea96edd5fa7dd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1762593086567431950,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sg99v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b92770af-a526-4538-8bdf-c1e90d823cac,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8826d0668013cb7d849b4c99a3d3e26594264c0f758ef1e374a8351b130acf13,PodSandboxId:1bfd20382cb62b42ad51635001e5d92a2db256174f79d48f302842bc49c765a8,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1762593071582250017,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3251b7-c4f2-40cb-a567-fb7107e49f34,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af484047be20c45b18834a3656d42d8c57dca193c226458343d22a63e1dedb5a,PodSandboxId:58177cfbfd9e20bc3c743b865cb8d7a592223ce638b6001d427763c16f55c385,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1762593048290795967,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sfn75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc0cd00-700b-4aaf-a7c2-43a44c3c4507,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ccd253088f2367b00cf781bc8d42402a6ff98e4e500f3353333877a9d0ae02,PodSandboxId:2dbfd160c0e24fb123d03530aba3446c97b33483ff4459cea03853f7a2e835f2,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762593048066596564,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1a474f-cce9-415c-8ea9-6c22e5214dd8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e18d944a5ee49170d7f220c04f42beb90e846fbf6c16142ca326879d9026c2,PodSandboxId:c97f9b14dae37932a1b80c8bfdb8e01fb2b757f3c4ca378c02229f288232636d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762593042281553839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-59bdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795b6325-a86a-4023-91cd-f225c7256ed9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3080a601614a744aa67eeb580952b26b749b6e2e2e5a9c550b0465ca946e079,PodSandboxId:6d8cc3d14be2849427cf2320be72359e255ee98a2000b2948e0fb1d49c437a0f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762593042037288276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zrq2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62157653-cdee-4388-8995-5967ab1c69d0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7791d1f47c8c2cb28321bdb60d72aac6d79180814dc27a52921bd401d0602d6,PodSandboxId:4ec5d82ffd8dca7fddee38d99dfd43366b7a082a07c851bc597a36784e849eda,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762593028332835939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-160421,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4f44f332de188d5d4f2cf677863b2e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9113c77e3c48bcdf293a4699e15f3abab44d71090df0fb7370f5cd944f6f0168,PodSandboxId:9c34212c0adbfd991bcbf793dd3a8587209d9bb9f5a2ead86402b1ea8b611705,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762593028325798072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-160421,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58613835586725639763354eadcdb1c3,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-por
t\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c76e545ddea653c92937b79a050e57f2b272d131222053054a472bdced2470e,PodSandboxId:76c9467a442679908a51769d213f3b2229126932fb6c65a952f914d0e3e9a381,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762593028316009312,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-160421,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e9d676a94ebdd801fdbffd7a
9f55f8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d33f409246215d46cbbffedf52462c032636541138c757cfd54d682688b48f,PodSandboxId:19fe64d121b1556de1a6e84ebed0de271fded205907481f1f7e8c55b9d3e5f4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762593028304132925,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedule
r-addons-160421,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840db15779a87bc565d6e18b68b6e6ed,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c6bf692b-0eb2-4535-9680-43502031b46f name=/runtime.v1.RuntimeService/ListContainers
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.056310735Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8bf6abf3-297b-40e0-b51b-f645d1bbc9f0 name=/runtime.v1.RuntimeService/Version
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.056422251Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8bf6abf3-297b-40e0-b51b-f645d1bbc9f0 name=/runtime.v1.RuntimeService/Version
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.057902702Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=003c35d7-9e0f-4b16-bcb4-df2d1f2e70bf name=/runtime.v1.ImageService/ImageFsInfo
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.059196807Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762593317059171374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=003c35d7-9e0f-4b16-bcb4-df2d1f2e70bf name=/runtime.v1.ImageService/ImageFsInfo
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.060223902Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48f9a7ba-53df-41e6-aa11-0da895c468f8 name=/runtime.v1.RuntimeService/ListContainers
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.060283724Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48f9a7ba-53df-41e6-aa11-0da895c468f8 name=/runtime.v1.RuntimeService/ListContainers
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.060598951Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:35455cc687664ad7ea2a7a475b36c5357e1cc6574050127550a51e9f0bb9e38d,PodSandboxId:54fcc8a7091b208908f388842ff76c30d2bf0869268e2d55f845ed694a2ace27,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1762593175151318668,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 680dd0bb-32c0-4828-b24e-4ab7a48348f6,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15bdc36ccdefb0e73550bb8af6fe116f7f965f148cbd6b5c4c404330c0a8f875,PodSandboxId:bc7ad81818dcfc02292a1713f2226d541642d5d08281345b3ed15f9af2bf5881,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1762593131659875787,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 754ffa90-f171-4a9f-b224-f46c5410a1f2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240e71c7b9dd68f2fa91752289570623552ef6a1b050f0afdb70e562ec772d3c,PodSandboxId:9a0bccd23b65e30cd2665077accdd311247d767879b7a26275e5b91d14fe054f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1762593119490338931,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-tcjs2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a3e6e3ef-5205-4e9f-b0ae-aad7d2cbfb20,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:97e02ae9d1219f3f2ebe856c04843dc1732788cabd6bf99a42392ab8e1233c13,PodSandboxId:fd1fe8aa67b2bb800da00566272856d1054a3162e49cb865a3e8524954fd0838,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1762593096894193407,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qg7b7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c450565b-134c-4a3f-a3c0-6a80cf4a6103,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae9aa2310e3e5ebbcc4fb0e4aaa568022dec588e5efb514425464439cbb2dd66,PodSandboxId:e67e36c55384218475c9eeff09f93d0ff93538913d5b47e8fcdea96edd5fa7dd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1762593086567431950,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sg99v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b92770af-a526-4538-8bdf-c1e90d823cac,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8826d0668013cb7d849b4c99a3d3e26594264c0f758ef1e374a8351b130acf13,PodSandboxId:1bfd20382cb62b42ad51635001e5d92a2db256174f79d48f302842bc49c765a8,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1762593071582250017,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3251b7-c4f2-40cb-a567-fb7107e49f34,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af484047be20c45b18834a3656d42d8c57dca193c226458343d22a63e1dedb5a,PodSandboxId:58177cfbfd9e20bc3c743b865cb8d7a592223ce638b6001d427763c16f55c385,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1762593048290795967,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sfn75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc0cd00-700b-4aaf-a7c2-43a44c3c4507,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ccd253088f2367b00cf781bc8d42402a6ff98e4e500f3353333877a9d0ae02,PodSandboxId:2dbfd160c0e24fb123d03530aba3446c97b33483ff4459cea03853f7a2e835f2,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762593048066596564,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1a474f-cce9-415c-8ea9-6c22e5214dd8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e18d944a5ee49170d7f220c04f42beb90e846fbf6c16142ca326879d9026c2,PodSandboxId:c97f9b14dae37932a1b80c8bfdb8e01fb2b757f3c4ca378c02229f288232636d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762593042281553839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-59bdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795b6325-a86a-4023-91cd-f225c7256ed9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3080a601614a744aa67eeb580952b26b749b6e2e2e5a9c550b0465ca946e079,PodSandboxId:6d8cc3d14be2849427cf2320be72359e255ee98a2000b2948e0fb1d49c437a0f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762593042037288276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zrq2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62157653-cdee-4388-8995-5967ab1c69d0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7791d1f47c8c2cb28321bdb60d72aac6d79180814dc27a52921bd401d0602d6,PodSandboxId:4ec5d82ffd8dca7fddee38d99dfd43366b7a082a07c851bc597a36784e849eda,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762593028332835939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-160421,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4f44f332de188d5d4f2cf677863b2e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9113c77e3c48bcdf293a4699e15f3abab44d71090df0fb7370f5cd944f6f0168,PodSandboxId:9c34212c0adbfd991bcbf793dd3a8587209d9bb9f5a2ead86402b1ea8b611705,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762593028325798072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-160421,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58613835586725639763354eadcdb1c3,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-por
t\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c76e545ddea653c92937b79a050e57f2b272d131222053054a472bdced2470e,PodSandboxId:76c9467a442679908a51769d213f3b2229126932fb6c65a952f914d0e3e9a381,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762593028316009312,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-160421,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e9d676a94ebdd801fdbffd7a
9f55f8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d33f409246215d46cbbffedf52462c032636541138c757cfd54d682688b48f,PodSandboxId:19fe64d121b1556de1a6e84ebed0de271fded205907481f1f7e8c55b9d3e5f4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762593028304132925,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedule
r-addons-160421,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840db15779a87bc565d6e18b68b6e6ed,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48f9a7ba-53df-41e6-aa11-0da895c468f8 name=/runtime.v1.RuntimeService/ListContainers
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.096131081Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f89f525c-740b-42b2-9514-a0026f247e3f name=/runtime.v1.RuntimeService/Version
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.096216997Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f89f525c-740b-42b2-9514-a0026f247e3f name=/runtime.v1.RuntimeService/Version
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.097600058Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=08762e6f-ec36-4e09-8945-3d3b9c287376 name=/runtime.v1.ImageService/ImageFsInfo
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.099517828Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1762593317099492171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08762e6f-ec36-4e09-8945-3d3b9c287376 name=/runtime.v1.ImageService/ImageFsInfo
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.100110907Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebbe5c7b-a753-4938-802d-8d15406bea8e name=/runtime.v1.RuntimeService/ListContainers
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.100216260Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebbe5c7b-a753-4938-802d-8d15406bea8e name=/runtime.v1.RuntimeService/ListContainers
Nov 08 09:15:17 addons-160421 crio[818]: time="2025-11-08 09:15:17.100559852Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:35455cc687664ad7ea2a7a475b36c5357e1cc6574050127550a51e9f0bb9e38d,PodSandboxId:54fcc8a7091b208908f388842ff76c30d2bf0869268e2d55f845ed694a2ace27,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1762593175151318668,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 680dd0bb-32c0-4828-b24e-4ab7a48348f6,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15bdc36ccdefb0e73550bb8af6fe116f7f965f148cbd6b5c4c404330c0a8f875,PodSandboxId:bc7ad81818dcfc02292a1713f2226d541642d5d08281345b3ed15f9af2bf5881,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1762593131659875787,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 754ffa90-f171-4a9f-b224-f46c5410a1f2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240e71c7b9dd68f2fa91752289570623552ef6a1b050f0afdb70e562ec772d3c,PodSandboxId:9a0bccd23b65e30cd2665077accdd311247d767879b7a26275e5b91d14fe054f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1762593119490338931,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-tcjs2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a3e6e3ef-5205-4e9f-b0ae-aad7d2cbfb20,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:97e02ae9d1219f3f2ebe856c04843dc1732788cabd6bf99a42392ab8e1233c13,PodSandboxId:fd1fe8aa67b2bb800da00566272856d1054a3162e49cb865a3e8524954fd0838,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1762593096894193407,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qg7b7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c450565b-134c-4a3f-a3c0-6a80cf4a6103,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae9aa2310e3e5ebbcc4fb0e4aaa568022dec588e5efb514425464439cbb2dd66,PodSandboxId:e67e36c55384218475c9eeff09f93d0ff93538913d5b47e8fcdea96edd5fa7dd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1762593086567431950,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sg99v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b92770af-a526-4538-8bdf-c1e90d823cac,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8826d0668013cb7d849b4c99a3d3e26594264c0f758ef1e374a8351b130acf13,PodSandboxId:1bfd20382cb62b42ad51635001e5d92a2db256174f79d48f302842bc49c765a8,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1762593071582250017,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3251b7-c4f2-40cb-a567-fb7107e49f34,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af484047be20c45b18834a3656d42d8c57dca193c226458343d22a63e1dedb5a,PodSandboxId:58177cfbfd9e20bc3c743b865cb8d7a592223ce638b6001d427763c16f55c385,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1762593048290795967,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-sfn75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc0cd00-700b-4aaf-a7c2-43a44c3c4507,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0ccd253088f2367b00cf781bc8d42402a6ff98e4e500f3353333877a9d0ae02,PodSandboxId:2dbfd160c0e24fb123d03530aba3446c97b33483ff4459cea03853f7a2e835f2,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1762593048066596564,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1a474f-cce9-415c-8ea9-6c22e5214dd8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26e18d944a5ee49170d7f220c04f42beb90e846fbf6c16142ca326879d9026c2,PodSandboxId:c97f9b14dae37932a1b80c8bfdb8e01fb2b757f3c4ca378c02229f288232636d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1762593042281553839,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-59bdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795b6325-a86a-4023-91cd-f225c7256ed9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3080a601614a744aa67eeb580952b26b749b6e2e2e5a9c550b0465ca946e079,PodSandboxId:6d8cc3d14be2849427cf2320be72359e255ee98a2000b2948e0fb1d49c437a0f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1762593042037288276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zrq2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62157653-cdee-4388-8995-5967ab1c69d0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7791d1f47c8c2cb28321bdb60d72aac6d79180814dc27a52921bd401d0602d6,PodSandboxId:4ec5d82ffd8dca7fddee38d99dfd43366b7a082a07c851bc597a36784e849eda,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1762593028332835939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-160421,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4f44f332de188d5d4f2cf677863b2e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9113c77e3c48bcdf293a4699e15f3abab44d71090df0fb7370f5cd944f6f0168,PodSandboxId:9c34212c0adbfd991bcbf793dd3a8587209d9bb9f5a2ead86402b1ea8b611705,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1762593028325798072,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-160421,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58613835586725639763354eadcdb1c3,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-por
t\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c76e545ddea653c92937b79a050e57f2b272d131222053054a472bdced2470e,PodSandboxId:76c9467a442679908a51769d213f3b2229126932fb6c65a952f914d0e3e9a381,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1762593028316009312,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-160421,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e9d676a94ebdd801fdbffd7a
9f55f8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d33f409246215d46cbbffedf52462c032636541138c757cfd54d682688b48f,PodSandboxId:19fe64d121b1556de1a6e84ebed0de271fded205907481f1f7e8c55b9d3e5f4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1762593028304132925,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedule
r-addons-160421,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 840db15779a87bc565d6e18b68b6e6ed,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ebbe5c7b-a753-4938-802d-8d15406bea8e name=/runtime.v1.RuntimeService/ListContainers
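The `CreatedAt` fields in the ListContainers responses above are nanosecond Unix epochs (e.g. `1762593048066596564` for storage-provisioner). A small helper for reading them back as wall-clock UTC times while scanning this log (plain Python; the function name is illustrative, not part of minikube or CRI-O):

```python
from datetime import datetime, timezone

def created_at_utc(nanos: int) -> str:
    """Convert a CRI CreatedAt value (nanoseconds since the Unix epoch) to a UTC timestamp."""
    secs, rem = divmod(nanos, 1_000_000_000)
    dt = datetime.fromtimestamp(secs, tz=timezone.utc)
    return dt.strftime("%Y-%m-%d %H:%M:%S") + f".{rem // 1_000_000:03d}"

# storage-provisioner container from the response above
print(created_at_utc(1762593048066596564))  # → 2025-11-08 09:10:48.066
```

This lines up with the log's own clock: the ImageFsInfo response stamped `1762593317099492171` was logged at `09:15:17.099`.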
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
35455cc687664 docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 2 minutes ago Running nginx 0 54fcc8a7091b2 nginx
15bdc36ccdefb gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 3 minutes ago Running busybox 0 bc7ad81818dcf busybox
240e71c7b9dd6 registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27 3 minutes ago Running controller 0 9a0bccd23b65e ingress-nginx-controller-6c8bf45fb-tcjs2
97e02ae9d1219 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f 3 minutes ago Exited patch 0 fd1fe8aa67b2b ingress-nginx-admission-patch-qg7b7
ae9aa2310e3e5 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f 3 minutes ago Exited create 0 e67e36c553842 ingress-nginx-admission-create-sg99v
8826d0668013c docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 4 minutes ago Running minikube-ingress-dns 0 1bfd20382cb62 kube-ingress-dns-minikube
af484047be20c docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 58177cfbfd9e2 amd-gpu-device-plugin-sfn75
e0ccd253088f2 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 2dbfd160c0e24 storage-provisioner
26e18d944a5ee 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 c97f9b14dae37 coredns-66bc5c9577-59bdx
b3080a601614a fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7 4 minutes ago Running kube-proxy 0 6d8cc3d14be28 kube-proxy-zrq2p
e7791d1f47c8c 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115 4 minutes ago Running etcd 0 4ec5d82ffd8dc etcd-addons-160421
9113c77e3c48b c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97 4 minutes ago Running kube-apiserver 0 9c34212c0adbf kube-apiserver-addons-160421
6c76e545ddea6 c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f 4 minutes ago Running kube-controller-manager 0 76c9467a44267 kube-controller-manager-addons-160421
15d33f4092462 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813 4 minutes ago Running kube-scheduler 0 19fe64d121b15 kube-scheduler-addons-160421
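Every field in the container status rows above is whitespace-free except CREATED ("N minutes ago", three tokens), so the rows can be split mechanically. A quick sketch for pulling a row apart, assuming rows keep this eight-column shape (column names are mine, not from the log):

```python
COLS = ["container", "image", "created", "state", "name", "attempt", "pod_id", "pod"]

def parse_row(line: str) -> dict:
    parts = line.split()
    # CREATED is rendered as e.g. "3 minutes ago" (three tokens); rejoin it
    fields = parts[:2] + [" ".join(parts[2:5])] + parts[5:]
    return dict(zip(COLS, fields))

row = parse_row(
    "97e02ae9d1219 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f 3 minutes ago Exited patch 0 fd1fe8aa67b2b ingress-nginx-admission-patch-qg7b7"
)
print(row["state"], row["name"])  # → Exited patch
```

The two `Exited` rows (admission `create`/`patch`) are the one-shot webhook-certgen Jobs, which are expected to exit after completing.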
==> coredns [26e18d944a5ee49170d7f220c04f42beb90e846fbf6c16142ca326879d9026c2] <==
[INFO] 10.244.0.8:41883 - 1482 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000412365s
[INFO] 10.244.0.8:41883 - 21616 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000117498s
[INFO] 10.244.0.8:41883 - 34154 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000419144s
[INFO] 10.244.0.8:41883 - 58846 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000142107s
[INFO] 10.244.0.8:41883 - 20642 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000120748s
[INFO] 10.244.0.8:41883 - 18421 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000227607s
[INFO] 10.244.0.8:41883 - 32957 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000413704s
[INFO] 10.244.0.8:59394 - 56387 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00034634s
[INFO] 10.244.0.8:59394 - 56074 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001660991s
[INFO] 10.244.0.8:41771 - 25184 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000698769s
[INFO] 10.244.0.8:41771 - 24888 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00069193s
[INFO] 10.244.0.8:53874 - 10043 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009994s
[INFO] 10.244.0.8:53874 - 10321 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000342787s
[INFO] 10.244.0.8:43715 - 17698 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00010606s
[INFO] 10.244.0.8:43715 - 17936 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000105874s
[INFO] 10.244.0.23:34858 - 60493 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000544555s
[INFO] 10.244.0.23:46821 - 23980 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000451749s
[INFO] 10.244.0.23:54778 - 26622 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000175681s
[INFO] 10.244.0.23:60869 - 45106 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000256242s
[INFO] 10.244.0.23:36708 - 47309 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000133672s
[INFO] 10.244.0.23:54526 - 50457 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000093154s
[INFO] 10.244.0.23:57867 - 40188 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001839973s
[INFO] 10.244.0.23:60520 - 31412 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.005312291s
[INFO] 10.244.0.28:42271 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000542169s
[INFO] 10.244.0.28:42068 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000173146s
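The NXDOMAIN runs in the coredns log above are a pod resolver walking its search path: with the default kubelet-provisioned `resolv.conf` (`ndots:5`), a name with fewer than five dots is tried with each search suffix before being tried as-is, which is exactly the sequence logged for `registry.kube-system.svc.cluster.local`. A sketch of that expansion, assuming the default kube-system pod search list:

```python
# Search list a kube-system pod gets from the kubelet-managed resolv.conf
SEARCH = ["kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"]

def candidates(name: str, ndots: int = 5) -> list[str]:
    """Names the resolver tries, in order, per glibc/musl ndots semantics."""
    tried = []
    if name.count(".") < ndots:  # fewer dots than ndots: search path first
        tried += [f"{name}.{suffix}" for suffix in SEARCH]
    return tried + [name]

for q in candidates("registry.kube-system.svc.cluster.local"):
    print(q)  # three NXDOMAIN candidates, then the NOERROR bare name
```

The three expanded candidates correspond to the three NXDOMAIN lines per query batch, and the final bare name to the NOERROR answer.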
==> describe nodes <==
Name:               addons-160421
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=addons-160421
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
                    minikube.k8s.io/name=addons-160421
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2025_11_08T09_10_34_0700
                    minikube.k8s.io/version=v1.37.0
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
                    topology.hostpath.csi/node=addons-160421
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 08 Nov 2025 09:10:30 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  addons-160421
  AcquireTime:     <unset>
  RenewTime:       Sat, 08 Nov 2025 09:15:10 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sat, 08 Nov 2025 09:13:08 +0000   Sat, 08 Nov 2025 09:10:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sat, 08 Nov 2025 09:13:08 +0000   Sat, 08 Nov 2025 09:10:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sat, 08 Nov 2025 09:13:08 +0000   Sat, 08 Nov 2025 09:10:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sat, 08 Nov 2025 09:13:08 +0000   Sat, 08 Nov 2025 09:10:34 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.39.239
  Hostname:    addons-160421
Capacity:
  cpu:                2
  ephemeral-storage:  17734596Ki
  hugepages-2Mi:      0
  memory:             4001788Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  17734596Ki
  hugepages-2Mi:      0
  memory:             4001788Ki
  pods:               110
System Info:
  Machine ID:                 98a3416f24464b90b3a3e4dcbf116111
  System UUID:                98a3416f-2446-4b90-b3a3-e4dcbf116111
  Boot ID:                    15acb0fc-e6f5-4751-81b9-2cae5b97df04
  Kernel Version:             6.6.95
  OS Image:                   Buildroot 2025.02
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  cri-o://1.29.1
  Kubelet Version:            v1.34.1
  Kube-Proxy Version:
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m10s
default hello-world-app-5d498dc89-ncc5m 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m27s
ingress-nginx ingress-nginx-controller-6c8bf45fb-tcjs2 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m30s
kube-system amd-gpu-device-plugin-sfn75 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m37s
kube-system coredns-66bc5c9577-59bdx 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m38s
kube-system etcd-addons-160421 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m44s
kube-system kube-apiserver-addons-160421 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m44s
kube-system kube-controller-manager-addons-160421 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m44s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m33s
kube-system kube-proxy-zrq2p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m39s
kube-system kube-scheduler-addons-160421 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m44s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m33s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m33s kube-proxy
Normal Starting 4m44s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m44s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m43s kubelet Node addons-160421 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m43s kubelet Node addons-160421 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m43s kubelet Node addons-160421 status is now: NodeHasSufficientPID
Normal NodeReady 4m43s kubelet Node addons-160421 status is now: NodeReady
Normal RegisteredNode 4m40s node-controller Node addons-160421 event: Registered Node addons-160421 in Controller
==> dmesg <==
[ +0.688867] kauditd_printk_skb: 363 callbacks suppressed
[ +0.000044] kauditd_printk_skb: 503 callbacks suppressed
[Nov 8 09:11] kauditd_printk_skb: 140 callbacks suppressed
[ +8.958035] kauditd_printk_skb: 11 callbacks suppressed
[ +5.610754] kauditd_printk_skb: 32 callbacks suppressed
[ +8.165750] kauditd_printk_skb: 26 callbacks suppressed
[ +4.235199] kauditd_printk_skb: 46 callbacks suppressed
[ +5.053103] kauditd_printk_skb: 41 callbacks suppressed
[ +4.761740] kauditd_printk_skb: 106 callbacks suppressed
[ +5.605568] kauditd_printk_skb: 101 callbacks suppressed
[ +0.000048] kauditd_printk_skb: 114 callbacks suppressed
[Nov 8 09:12] kauditd_printk_skb: 68 callbacks suppressed
[ +3.511836] kauditd_printk_skb: 47 callbacks suppressed
[ +9.522661] kauditd_printk_skb: 17 callbacks suppressed
[ +0.001063] kauditd_printk_skb: 22 callbacks suppressed
[ +0.861075] kauditd_printk_skb: 107 callbacks suppressed
[ +0.507002] kauditd_printk_skb: 99 callbacks suppressed
[ +1.000125] kauditd_printk_skb: 55 callbacks suppressed
[ +4.016138] kauditd_printk_skb: 193 callbacks suppressed
[Nov 8 09:13] kauditd_printk_skb: 127 callbacks suppressed
[ +3.042395] kauditd_printk_skb: 30 callbacks suppressed
[ +5.771899] kauditd_printk_skb: 10 callbacks suppressed
[ +0.000217] kauditd_printk_skb: 10 callbacks suppressed
[ +8.551372] kauditd_printk_skb: 41 callbacks suppressed
[Nov 8 09:15] kauditd_printk_skb: 127 callbacks suppressed
==> etcd [e7791d1f47c8c2cb28321bdb60d72aac6d79180814dc27a52921bd401d0602d6] <==
{"level":"info","ts":"2025-11-08T09:11:34.822529Z","caller":"traceutil/trace.go:172","msg":"trace[431118567] transaction","detail":"{read_only:false; response_revision:1004; number_of_response:1; }","duration":"372.999676ms","start":"2025-11-08T09:11:34.449514Z","end":"2025-11-08T09:11:34.822514Z","steps":["trace[431118567] 'process raft request' (duration: 370.376576ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-08T09:11:34.827712Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:11:34.449493Z","time spent":"373.073169ms","remote":"127.0.0.1:45582","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1001 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
{"level":"info","ts":"2025-11-08T09:11:39.595646Z","caller":"traceutil/trace.go:172","msg":"trace[923485457] linearizableReadLoop","detail":"{readStateIndex:1049; appliedIndex:1049; }","duration":"201.053424ms","start":"2025-11-08T09:11:39.394575Z","end":"2025-11-08T09:11:39.595629Z","steps":["trace[923485457] 'read index received' (duration: 201.047771ms)","trace[923485457] 'applied index is now lower than readState.Index' (duration: 4.663µs)"],"step_count":2}
{"level":"warn","ts":"2025-11-08T09:11:39.595787Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"201.20031ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-08T09:11:39.595825Z","caller":"traceutil/trace.go:172","msg":"trace[537620302] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1018; }","duration":"201.248531ms","start":"2025-11-08T09:11:39.394570Z","end":"2025-11-08T09:11:39.595819Z","steps":["trace[537620302] 'agreement among raft nodes before linearized reading' (duration: 201.171315ms)"],"step_count":1}
{"level":"info","ts":"2025-11-08T09:11:39.595870Z","caller":"traceutil/trace.go:172","msg":"trace[1755884883] transaction","detail":"{read_only:false; response_revision:1019; number_of_response:1; }","duration":"218.158165ms","start":"2025-11-08T09:11:39.377701Z","end":"2025-11-08T09:11:39.595859Z","steps":["trace[1755884883] 'process raft request' (duration: 217.962388ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-08T09:11:39.596037Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.019138ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-08T09:11:39.596083Z","caller":"traceutil/trace.go:172","msg":"trace[792399904] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1019; }","duration":"128.076524ms","start":"2025-11-08T09:11:39.468001Z","end":"2025-11-08T09:11:39.596077Z","steps":["trace[792399904] 'agreement among raft nodes before linearized reading' (duration: 128.006483ms)"],"step_count":1}
{"level":"info","ts":"2025-11-08T09:11:51.507179Z","caller":"traceutil/trace.go:172","msg":"trace[119432246] linearizableReadLoop","detail":"{readStateIndex:1139; appliedIndex:1139; }","duration":"115.828192ms","start":"2025-11-08T09:11:51.391314Z","end":"2025-11-08T09:11:51.507142Z","steps":["trace[119432246] 'read index received' (duration: 115.821358ms)","trace[119432246] 'applied index is now lower than readState.Index' (duration: 5.804µs)"],"step_count":2}
{"level":"warn","ts":"2025-11-08T09:11:51.507396Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.085498ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-08T09:11:51.507433Z","caller":"traceutil/trace.go:172","msg":"trace[1756525368] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1106; }","duration":"116.138047ms","start":"2025-11-08T09:11:51.391284Z","end":"2025-11-08T09:11:51.507422Z","steps":["trace[1756525368] 'agreement among raft nodes before linearized reading' (duration: 116.0469ms)"],"step_count":1}
{"level":"info","ts":"2025-11-08T09:11:51.510451Z","caller":"traceutil/trace.go:172","msg":"trace[129799020] transaction","detail":"{read_only:false; response_revision:1107; number_of_response:1; }","duration":"175.088801ms","start":"2025-11-08T09:11:51.335346Z","end":"2025-11-08T09:11:51.510435Z","steps":["trace[129799020] 'process raft request' (duration: 173.026151ms)"],"step_count":1}
{"level":"info","ts":"2025-11-08T09:12:06.020736Z","caller":"traceutil/trace.go:172","msg":"trace[792142060] linearizableReadLoop","detail":"{readStateIndex:1204; appliedIndex:1204; }","duration":"137.671361ms","start":"2025-11-08T09:12:05.883048Z","end":"2025-11-08T09:12:06.020720Z","steps":["trace[792142060] 'read index received' (duration: 137.66529ms)","trace[792142060] 'applied index is now lower than readState.Index' (duration: 4.963µs)"],"step_count":2}
{"level":"warn","ts":"2025-11-08T09:12:06.020848Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.78326ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-08T09:12:06.020847Z","caller":"traceutil/trace.go:172","msg":"trace[864686288] transaction","detail":"{read_only:false; response_revision:1169; number_of_response:1; }","duration":"154.017189ms","start":"2025-11-08T09:12:05.866820Z","end":"2025-11-08T09:12:06.020837Z","steps":["trace[864686288] 'process raft request' (duration: 153.922821ms)"],"step_count":1}
{"level":"info","ts":"2025-11-08T09:12:06.020865Z","caller":"traceutil/trace.go:172","msg":"trace[1739323167] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses; range_end:; response_count:0; response_revision:1168; }","duration":"137.817167ms","start":"2025-11-08T09:12:05.883044Z","end":"2025-11-08T09:12:06.020861Z","steps":["trace[1739323167] 'agreement among raft nodes before linearized reading' (duration: 137.761199ms)"],"step_count":1}
{"level":"info","ts":"2025-11-08T09:12:34.161544Z","caller":"traceutil/trace.go:172","msg":"trace[1826073163] transaction","detail":"{read_only:false; response_revision:1348; number_of_response:1; }","duration":"331.438486ms","start":"2025-11-08T09:12:33.830095Z","end":"2025-11-08T09:12:34.161534Z","steps":["trace[1826073163] 'process raft request' (duration: 331.345269ms)"],"step_count":1}
{"level":"info","ts":"2025-11-08T09:12:34.161111Z","caller":"traceutil/trace.go:172","msg":"trace[270034770] linearizableReadLoop","detail":"{readStateIndex:1391; appliedIndex:1391; }","duration":"271.807184ms","start":"2025-11-08T09:12:33.889052Z","end":"2025-11-08T09:12:34.160859Z","steps":["trace[270034770] 'read index received' (duration: 271.797785ms)","trace[270034770] 'applied index is now lower than readState.Index' (duration: 8.141µs)"],"step_count":2}
{"level":"warn","ts":"2025-11-08T09:12:34.161709Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:12:33.830075Z","time spent":"331.548805ms","remote":"127.0.0.1:45620","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2157,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/test-local-path\" mod_revision:1287 > success:<request_put:<key:\"/registry/pods/default/test-local-path\" value_size:2111 >> failure:<request_range:<key:\"/registry/pods/default/test-local-path\" > >"}
{"level":"warn","ts":"2025-11-08T09:12:34.162579Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"273.519163ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" limit:1 ","response":"range_response_count:1 size:1837"}
{"level":"info","ts":"2025-11-08T09:12:34.164210Z","caller":"traceutil/trace.go:172","msg":"trace[754771074] transaction","detail":"{read_only:false; response_revision:1349; number_of_response:1; }","duration":"240.225572ms","start":"2025-11-08T09:12:33.923977Z","end":"2025-11-08T09:12:34.164203Z","steps":["trace[754771074] 'process raft request' (duration: 240.152846ms)"],"step_count":1}
{"level":"info","ts":"2025-11-08T09:12:34.163280Z","caller":"traceutil/trace.go:172","msg":"trace[1122744440] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1348; }","duration":"274.228193ms","start":"2025-11-08T09:12:33.889043Z","end":"2025-11-08T09:12:34.163271Z","steps":["trace[1122744440] 'agreement among raft nodes before linearized reading' (duration: 272.59656ms)"],"step_count":1}
{"level":"info","ts":"2025-11-08T09:12:35.622772Z","caller":"traceutil/trace.go:172","msg":"trace[2015697301] transaction","detail":"{read_only:false; response_revision:1356; number_of_response:1; }","duration":"128.982235ms","start":"2025-11-08T09:12:35.493777Z","end":"2025-11-08T09:12:35.622759Z","steps":["trace[2015697301] 'process raft request' (duration: 128.880572ms)"],"step_count":1}
{"level":"info","ts":"2025-11-08T09:12:52.695396Z","caller":"traceutil/trace.go:172","msg":"trace[1213960974] transaction","detail":"{read_only:false; response_revision:1537; number_of_response:1; }","duration":"167.654983ms","start":"2025-11-08T09:12:52.527718Z","end":"2025-11-08T09:12:52.695373Z","steps":["trace[1213960974] 'process raft request' (duration: 167.376549ms)"],"step_count":1}
{"level":"info","ts":"2025-11-08T09:13:00.992307Z","caller":"traceutil/trace.go:172","msg":"trace[1791896210] transaction","detail":"{read_only:false; response_revision:1592; number_of_response:1; }","duration":"179.276235ms","start":"2025-11-08T09:13:00.813011Z","end":"2025-11-08T09:13:00.992287Z","steps":["trace[1791896210] 'process raft request' (duration: 179.133982ms)"],"step_count":1}
==> kernel <==
09:15:17 up 5 min, 0 users, load average: 0.33, 0.97, 0.52
Linux addons-160421 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Nov 1 20:49:51 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [9113c77e3c48bcdf293a4699e15f3abab44d71090df0fb7370f5cd944f6f0168] <==
E1108 09:11:28.801397 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.174.194:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.174.194:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.174.194:443: connect: connection refused" logger="UnhandledError"
E1108 09:11:28.805263 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.174.194:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.174.194:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.174.194:443: connect: connection refused" logger="UnhandledError"
I1108 09:11:28.942693 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1108 09:12:18.014466 1 conn.go:339] Error on socket receive: read tcp 192.168.39.239:8443->192.168.39.1:41062: use of closed network connection
E1108 09:12:18.215231 1 conn.go:339] Error on socket receive: read tcp 192.168.39.239:8443->192.168.39.1:41088: use of closed network connection
I1108 09:12:27.619834 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.249.212"}
I1108 09:12:50.781379 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1108 09:12:50.966768 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.214.107"}
E1108 09:12:59.709813 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I1108 09:13:07.361341 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1108 09:13:29.837344 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1108 09:13:34.060392 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1108 09:13:34.060448 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1108 09:13:34.098156 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1108 09:13:34.098207 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1108 09:13:34.114843 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1108 09:13:34.114872 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1108 09:13:34.134887 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1108 09:13:34.134980 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1108 09:13:34.144980 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1108 09:13:34.145021 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1108 09:13:35.115220 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1108 09:13:35.145343 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1108 09:13:35.170128 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I1108 09:15:15.964111 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.110.137"}
==> kube-controller-manager [6c76e545ddea653c92937b79a050e57f2b272d131222053054a472bdced2470e] <==
E1108 09:13:42.598512 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1108 09:13:44.291785 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1108 09:13:44.293197 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1108 09:13:44.583582 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1108 09:13:44.584563 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1108 09:13:54.529240 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1108 09:13:54.530232 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1108 09:13:54.944230 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1108 09:13:54.945347 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1108 09:13:56.095150 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1108 09:13:56.096222 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1108 09:14:10.704253 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1108 09:14:10.705240 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1108 09:14:10.816658 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1108 09:14:10.817585 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1108 09:14:19.135222 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1108 09:14:19.136346 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1108 09:14:39.928328 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1108 09:14:39.929332 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1108 09:14:46.929054 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1108 09:14:46.930629 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1108 09:14:53.049038 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1108 09:14:53.050004 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1108 09:15:17.349234 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1108 09:15:17.350505 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [b3080a601614a744aa67eeb580952b26b749b6e2e2e5a9c550b0465ca946e079] <==
I1108 09:10:43.202690 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1108 09:10:43.302987 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1108 09:10:43.303029 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.239"]
E1108 09:10:43.303105 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1108 09:10:43.392654 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1108 09:10:43.393429 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1108 09:10:43.393534 1 server_linux.go:132] "Using iptables Proxier"
I1108 09:10:43.410680 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1108 09:10:43.411216 1 server.go:527] "Version info" version="v1.34.1"
I1108 09:10:43.411260 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1108 09:10:43.416410 1 config.go:200] "Starting service config controller"
I1108 09:10:43.416593 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1108 09:10:43.416606 1 config.go:106] "Starting endpoint slice config controller"
I1108 09:10:43.416609 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1108 09:10:43.416773 1 config.go:403] "Starting serviceCIDR config controller"
I1108 09:10:43.416793 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1108 09:10:43.419146 1 config.go:309] "Starting node config controller"
I1108 09:10:43.419168 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1108 09:10:43.517258 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1108 09:10:43.517297 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1108 09:10:43.517272 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1108 09:10:43.521695 1 shared_informer.go:356] "Caches are synced" controller="node config"
==> kube-scheduler [15d33f409246215d46cbbffedf52462c032636541138c757cfd54d682688b48f] <==
E1108 09:10:30.970263 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1108 09:10:30.970322 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1108 09:10:30.970207 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1108 09:10:30.972172 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1108 09:10:30.974692 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1108 09:10:30.974778 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1108 09:10:30.975285 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1108 09:10:30.975388 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1108 09:10:30.975476 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1108 09:10:31.801439 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1108 09:10:31.803736 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1108 09:10:31.837772 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1108 09:10:31.842643 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1108 09:10:31.869494 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1108 09:10:31.889396 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1108 09:10:31.909085 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1108 09:10:31.909497 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1108 09:10:31.917203 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1108 09:10:31.919460 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1108 09:10:31.991159 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1108 09:10:32.045823 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1108 09:10:32.198800 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1108 09:10:32.200906 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1108 09:10:32.385839 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
I1108 09:10:35.561902 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Nov 08 09:13:37 addons-160421 kubelet[1497]: I1108 09:13:37.860907 1497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c456b782-c4ad-441a-add3-510c4cd0e41e" path="/var/lib/kubelet/pods/c456b782-c4ad-441a-add3-510c4cd0e41e/volumes"
Nov 08 09:13:37 addons-160421 kubelet[1497]: I1108 09:13:37.863200 1497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd32b420-8ec1-4e9d-913b-33c360e14624" path="/var/lib/kubelet/pods/fd32b420-8ec1-4e9d-913b-33c360e14624/volumes"
Nov 08 09:13:44 addons-160421 kubelet[1497]: E1108 09:13:44.326279 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762593224325703179 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:13:44 addons-160421 kubelet[1497]: E1108 09:13:44.326302 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762593224325703179 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:13:54 addons-160421 kubelet[1497]: E1108 09:13:54.329375 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762593234328991982 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:13:54 addons-160421 kubelet[1497]: E1108 09:13:54.329811 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762593234328991982 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:14:04 addons-160421 kubelet[1497]: E1108 09:14:04.332867 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762593244332413143 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:14:04 addons-160421 kubelet[1497]: E1108 09:14:04.332948 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762593244332413143 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:14:14 addons-160421 kubelet[1497]: E1108 09:14:14.335980 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762593254335554966 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:14:14 addons-160421 kubelet[1497]: E1108 09:14:14.336047 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762593254335554966 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:14:24 addons-160421 kubelet[1497]: E1108 09:14:24.339167 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762593264338621083 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:14:24 addons-160421 kubelet[1497]: E1108 09:14:24.339645 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762593264338621083 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:14:31 addons-160421 kubelet[1497]: I1108 09:14:31.854331 1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Nov 08 09:14:34 addons-160421 kubelet[1497]: E1108 09:14:34.343192 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762593274342729352 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:14:34 addons-160421 kubelet[1497]: E1108 09:14:34.343218 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762593274342729352 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:14:44 addons-160421 kubelet[1497]: E1108 09:14:44.347465 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762593284346972371 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:14:44 addons-160421 kubelet[1497]: E1108 09:14:44.347510 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762593284346972371 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:14:46 addons-160421 kubelet[1497]: I1108 09:14:46.854145 1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-sfn75" secret="" err="secret \"gcp-auth\" not found"
Nov 08 09:14:54 addons-160421 kubelet[1497]: E1108 09:14:54.350637 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762593294350178070 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:14:54 addons-160421 kubelet[1497]: E1108 09:14:54.350669 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762593294350178070 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:15:04 addons-160421 kubelet[1497]: E1108 09:15:04.353746 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762593304353167702 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:15:04 addons-160421 kubelet[1497]: E1108 09:15:04.353773 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762593304353167702 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:15:14 addons-160421 kubelet[1497]: E1108 09:15:14.356664 1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1762593314356219627 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:15:14 addons-160421 kubelet[1497]: E1108 09:15:14.356707 1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1762593314356219627 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 08 09:15:15 addons-160421 kubelet[1497]: I1108 09:15:15.955216 1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4hxk\" (UniqueName: \"kubernetes.io/projected/d0ed5d29-a9e9-4641-9906-1c21bdbede9c-kube-api-access-k4hxk\") pod \"hello-world-app-5d498dc89-ncc5m\" (UID: \"d0ed5d29-a9e9-4641-9906-1c21bdbede9c\") " pod="default/hello-world-app-5d498dc89-ncc5m"
==> storage-provisioner [e0ccd253088f2367b00cf781bc8d42402a6ff98e4e500f3353333877a9d0ae02] <==
W1108 09:14:52.207877 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:14:54.212445 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:14:54.218846 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:14:56.222333 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:14:56.230947 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:14:58.234492 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:14:58.239711 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:15:00.243201 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:15:00.251224 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:15:02.254741 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:15:02.260239 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:15:04.264018 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:15:04.269978 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:15:06.272772 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:15:06.280215 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:15:08.283421 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:15:08.289118 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:15:10.293302 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:15:10.299326 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:15:12.302728 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:15:12.307777 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:15:14.311235 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:15:14.317832 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:15:16.323602 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1108 09:15:16.338267 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-160421 -n addons-160421
helpers_test.go:269: (dbg) Run: kubectl --context addons-160421 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-ncc5m ingress-nginx-admission-create-sg99v ingress-nginx-admission-patch-qg7b7
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context addons-160421 describe pod hello-world-app-5d498dc89-ncc5m ingress-nginx-admission-create-sg99v ingress-nginx-admission-patch-qg7b7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-160421 describe pod hello-world-app-5d498dc89-ncc5m ingress-nginx-admission-create-sg99v ingress-nginx-admission-patch-qg7b7: exit status 1 (71.322383ms)
-- stdout --
Name: hello-world-app-5d498dc89-ncc5m
Namespace: default
Priority: 0
Service Account: default
Node: addons-160421/192.168.39.239
Start Time: Sat, 08 Nov 2025 09:15:15 +0000
Labels: app=hello-world-app
pod-template-hash=5d498dc89
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hello-world-app-5d498dc89
Containers:
hello-world-app:
Container ID:
Image: docker.io/kicbase/echo-server:1.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k4hxk (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-k4hxk:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3s default-scheduler Successfully assigned default/hello-world-app-5d498dc89-ncc5m to addons-160421
Normal Pulling 2s kubelet Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-sg99v" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-qg7b7" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-160421 describe pod hello-world-app-5d498dc89-ncc5m ingress-nginx-admission-create-sg99v ingress-nginx-admission-patch-qg7b7: exit status 1
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-160421 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-160421 addons disable ingress-dns --alsologtostderr -v=1: (1.413786984s)
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-160421 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-160421 addons disable ingress --alsologtostderr -v=1: (7.687694151s)
--- FAIL: TestAddons/parallel/Ingress (156.75s)
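Editor's note on the failure above: the test failed at the `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` step, where `minikube ssh` reported "Process exited with status 28". That 28 is curl's own exit code propagated through ssh (CURLE_OPERATION_TIMEDOUT), i.e. the ingress controller never answered within the timeout, rather than the connection being refused. A small helper to decode the curl exit codes that commonly surface in these probes (the function name and the set of codes shown are illustrative, not part of the test harness):

```shell
#!/bin/sh
# Decode common curl exit codes seen when a `minikube ssh "curl ..."` probe fails.
# Codes follow libcurl's documented error list.
explain_curl_exit() {
  case "$1" in
    6)  echo "could not resolve host" ;;        # CURLE_COULDNT_RESOLVE_HOST
    7)  echo "connection refused" ;;            # CURLE_COULDNT_CONNECT
    28) echo "operation timed out" ;;           # CURLE_OPERATION_TIMEDOUT
    *)  echo "other failure (code $1)" ;;
  esac
}

explain_curl_exit 28
```

Under that reading, the usual next steps would be checking whether the ingress-nginx controller pod was actually serving (it passed the initial readiness wait) and whether anything inside the VM was listening on port 80, e.g. via `minikube -p addons-160421 ssh "sudo ss -tlnp | grep :80"` before re-running the probe with an explicit `-m` timeout so curl fails fast instead of hanging for the full ssh window.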