=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run: kubectl --context addons-266876 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run: kubectl --context addons-266876 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run: kubectl --context addons-266876 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [9e3ed8c7-5788-4d41-aba1-71043fc65fb1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [9e3ed8c7-5788-4d41-aba1-71043fc65fb1] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003799602s
I1121 23:49:27.749455 250664 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run: out/minikube-linux-amd64 -p addons-266876 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-266876 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.490316003s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run: kubectl --context addons-266876 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run: out/minikube-linux-amd64 -p addons-266876 ip
addons_test.go:299: (dbg) Run: nslookup hello-john.test 192.168.39.50
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-266876 -n addons-266876
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p addons-266876 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-266876 logs -n 25: (1.345055079s)
helpers_test.go:260: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ start │ -o=json --download-only -p download-only-263491 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2 --container-runtime=crio │ download-only-263491 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ │
│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
│ delete │ -p download-only-263491 │ download-only-263491 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
│ delete │ -p download-only-246895 │ download-only-246895 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
│ delete │ -p download-only-263491 │ download-only-263491 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
│ start │ --download-only -p binary-mirror-996598 --alsologtostderr --binary-mirror http://127.0.0.1:41123 --driver=kvm2 --container-runtime=crio │ binary-mirror-996598 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ │
│ delete │ -p binary-mirror-996598 │ binary-mirror-996598 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
│ addons │ enable dashboard -p addons-266876 │ addons-266876 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ │
│ addons │ disable dashboard -p addons-266876 │ addons-266876 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ │
│ start │ -p addons-266876 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-266876 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:48 UTC │
│ addons │ addons-266876 addons disable volcano --alsologtostderr -v=1 │ addons-266876 │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │ 21 Nov 25 23:48 UTC │
│ addons │ addons-266876 addons disable gcp-auth --alsologtostderr -v=1 │ addons-266876 │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
│ addons │ enable headlamp -p addons-266876 --alsologtostderr -v=1 │ addons-266876 │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
│ addons │ addons-266876 addons disable metrics-server --alsologtostderr -v=1 │ addons-266876 │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-266876 │ addons-266876 │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
│ addons │ addons-266876 addons disable registry-creds --alsologtostderr -v=1 │ addons-266876 │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
│ ssh │ addons-266876 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-266876 │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ │
│ addons │ addons-266876 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-266876 │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
│ ip │ addons-266876 ip │ addons-266876 │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
│ addons │ addons-266876 addons disable registry --alsologtostderr -v=1 │ addons-266876 │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
│ addons │ addons-266876 addons disable headlamp --alsologtostderr -v=1 │ addons-266876 │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
│ addons │ addons-266876 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-266876 │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
│ addons │ addons-266876 addons disable yakd --alsologtostderr -v=1 │ addons-266876 │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
│ addons │ addons-266876 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-266876 │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
│ ip │ addons-266876 ip │ addons-266876 │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │ 21 Nov 25 23:51 UTC │
└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/11/21 23:46:48
Running on machine: ubuntu-20-agent-13
Binary: Built with gc go1.25.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1121 23:46:48.131095 251263 out.go:360] Setting OutFile to fd 1 ...
I1121 23:46:48.131340 251263 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:46:48.131350 251263 out.go:374] Setting ErrFile to fd 2...
I1121 23:46:48.131354 251263 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:46:48.131528 251263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
I1121 23:46:48.132085 251263 out.go:368] Setting JSON to false
I1121 23:46:48.132905 251263 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":26936,"bootTime":1763741872,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1121 23:46:48.132973 251263 start.go:143] virtualization: kvm guest
I1121 23:46:48.134971 251263 out.go:179] * [addons-266876] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1121 23:46:48.136184 251263 notify.go:221] Checking for updates...
I1121 23:46:48.136230 251263 out.go:179] - MINIKUBE_LOCATION=21934
I1121 23:46:48.137505 251263 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1121 23:46:48.138918 251263 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
I1121 23:46:48.140232 251263 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
I1121 23:46:48.141364 251263 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1121 23:46:48.142744 251263 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1121 23:46:48.144346 251263 driver.go:422] Setting default libvirt URI to qemu:///system
I1121 23:46:48.178112 251263 out.go:179] * Using the kvm2 driver based on user configuration
I1121 23:46:48.179144 251263 start.go:309] selected driver: kvm2
I1121 23:46:48.179156 251263 start.go:930] validating driver "kvm2" against <nil>
I1121 23:46:48.179168 251263 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1121 23:46:48.179919 251263 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1121 23:46:48.180166 251263 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1121 23:46:48.180191 251263 cni.go:84] Creating CNI manager for ""
I1121 23:46:48.180267 251263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1121 23:46:48.180276 251263 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1121 23:46:48.180323 251263 start.go:353] cluster config:
{Name:addons-266876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1121 23:46:48.180438 251263 iso.go:125] acquiring lock: {Name:mkc83d3435f1eaa5a92358fc78f85b7d74048deb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1121 23:46:48.181860 251263 out.go:179] * Starting "addons-266876" primary control-plane node in "addons-266876" cluster
I1121 23:46:48.182929 251263 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1121 23:46:48.182959 251263 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
I1121 23:46:48.182976 251263 cache.go:65] Caching tarball of preloaded images
I1121 23:46:48.183059 251263 preload.go:238] Found /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1121 23:46:48.183069 251263 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
I1121 23:46:48.183354 251263 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/config.json ...
I1121 23:46:48.183376 251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/config.json: {Name:mk0295453cd01463fa22b5d6c7388981c204c24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1121 23:46:48.183507 251263 start.go:360] acquireMachinesLock for addons-266876: {Name:mk0193f6f5636a08cc7939c91649c5870e7698fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1121 23:46:48.183552 251263 start.go:364] duration metric: took 33.297µs to acquireMachinesLock for "addons-266876"
I1121 23:46:48.183570 251263 start.go:93] Provisioning new machine with config: &{Name:addons-266876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1121 23:46:48.183614 251263 start.go:125] createHost starting for "" (driver="kvm2")
I1121 23:46:48.185254 251263 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1121 23:46:48.185412 251263 start.go:159] libmachine.API.Create for "addons-266876" (driver="kvm2")
I1121 23:46:48.185441 251263 client.go:173] LocalClient.Create starting
I1121 23:46:48.185543 251263 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem
I1121 23:46:48.249364 251263 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem
I1121 23:46:48.566610 251263 main.go:143] libmachine: creating domain...
I1121 23:46:48.566636 251263 main.go:143] libmachine: creating network...
I1121 23:46:48.568191 251263 main.go:143] libmachine: found existing default network
I1121 23:46:48.568404 251263 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1121 23:46:48.568892 251263 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e90440}
I1121 23:46:48.569009 251263 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-266876</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1121 23:46:48.575044 251263 main.go:143] libmachine: creating private network mk-addons-266876 192.168.39.0/24...
I1121 23:46:48.645727 251263 main.go:143] libmachine: private network mk-addons-266876 192.168.39.0/24 created
I1121 23:46:48.646042 251263 main.go:143] libmachine: <network>
<name>mk-addons-266876</name>
<uuid>c503bc44-d3ea-47cf-b120-da4593d18380</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:80:0f:c2'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1121 23:46:48.646078 251263 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876 ...
I1121 23:46:48.646103 251263 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21934-244751/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
I1121 23:46:48.646114 251263 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21934-244751/.minikube
I1121 23:46:48.646192 251263 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21934-244751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21934-244751/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
I1121 23:46:48.924945 251263 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa...
I1121 23:46:48.947251 251263 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/addons-266876.rawdisk...
I1121 23:46:48.947299 251263 main.go:143] libmachine: Writing magic tar header
I1121 23:46:48.947321 251263 main.go:143] libmachine: Writing SSH key tar header
I1121 23:46:48.947404 251263 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876 ...
I1121 23:46:48.947463 251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876
I1121 23:46:48.947488 251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876 (perms=drwx------)
I1121 23:46:48.947500 251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751/.minikube/machines
I1121 23:46:48.947510 251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751/.minikube/machines (perms=drwxr-xr-x)
I1121 23:46:48.947521 251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751/.minikube
I1121 23:46:48.947528 251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751/.minikube (perms=drwxr-xr-x)
I1121 23:46:48.947540 251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751
I1121 23:46:48.947549 251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751 (perms=drwxrwxr-x)
I1121 23:46:48.947562 251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1121 23:46:48.947572 251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1121 23:46:48.947579 251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1121 23:46:48.947589 251263 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1121 23:46:48.947600 251263 main.go:143] libmachine: checking permissions on dir: /home
I1121 23:46:48.947606 251263 main.go:143] libmachine: skipping /home - not owner
I1121 23:46:48.947613 251263 main.go:143] libmachine: defining domain...
I1121 23:46:48.949155 251263 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-266876</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/addons-266876.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-266876'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1121 23:46:48.954504 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:cb:01:39 in network default
I1121 23:46:48.955203 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:46:48.955226 251263 main.go:143] libmachine: starting domain...
I1121 23:46:48.955230 251263 main.go:143] libmachine: ensuring networks are active...
I1121 23:46:48.956075 251263 main.go:143] libmachine: Ensuring network default is active
I1121 23:46:48.956468 251263 main.go:143] libmachine: Ensuring network mk-addons-266876 is active
I1121 23:46:48.957054 251263 main.go:143] libmachine: getting domain XML...
I1121 23:46:48.958124 251263 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-266876</name>
<uuid>c4a95d5c-2715-4bec-8bc2-a50909bf4217</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/addons-266876.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:ab:5a:31'/>
<source network='mk-addons-266876'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:cb:01:39'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1121 23:46:50.230732 251263 main.go:143] libmachine: waiting for domain to start...
I1121 23:46:50.232398 251263 main.go:143] libmachine: domain is now running
I1121 23:46:50.232423 251263 main.go:143] libmachine: waiting for IP...
I1121 23:46:50.233366 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:46:50.234245 251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
I1121 23:46:50.234266 251263 main.go:143] libmachine: trying to list again with source=arp
I1121 23:46:50.234594 251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
I1121 23:46:50.234654 251263 retry.go:31] will retry after 291.794239ms: waiting for domain to come up
I1121 23:46:50.528283 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:46:50.528971 251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
I1121 23:46:50.528987 251263 main.go:143] libmachine: trying to list again with source=arp
I1121 23:46:50.529342 251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
I1121 23:46:50.529380 251263 retry.go:31] will retry after 351.305248ms: waiting for domain to come up
I1121 23:46:50.882166 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:46:50.883099 251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
I1121 23:46:50.883122 251263 main.go:143] libmachine: trying to list again with source=arp
I1121 23:46:50.883485 251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
I1121 23:46:50.883531 251263 retry.go:31] will retry after 364.129033ms: waiting for domain to come up
I1121 23:46:51.249389 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:46:51.250192 251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
I1121 23:46:51.250210 251263 main.go:143] libmachine: trying to list again with source=arp
I1121 23:46:51.250511 251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
I1121 23:46:51.250562 251263 retry.go:31] will retry after 385.747401ms: waiting for domain to come up
I1121 23:46:51.638320 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:46:51.639301 251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
I1121 23:46:51.639319 251263 main.go:143] libmachine: trying to list again with source=arp
I1121 23:46:51.639704 251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
I1121 23:46:51.639759 251263 retry.go:31] will retry after 745.315642ms: waiting for domain to come up
I1121 23:46:52.386579 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:46:52.387430 251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
I1121 23:46:52.387444 251263 main.go:143] libmachine: trying to list again with source=arp
I1121 23:46:52.387845 251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
I1121 23:46:52.387891 251263 retry.go:31] will retry after 692.465755ms: waiting for domain to come up
I1121 23:46:53.081995 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:46:53.082882 251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
I1121 23:46:53.082899 251263 main.go:143] libmachine: trying to list again with source=arp
I1121 23:46:53.083254 251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
I1121 23:46:53.083289 251263 retry.go:31] will retry after 879.261574ms: waiting for domain to come up
I1121 23:46:53.964041 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:46:53.964752 251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
I1121 23:46:53.964779 251263 main.go:143] libmachine: trying to list again with source=arp
I1121 23:46:53.965086 251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
I1121 23:46:53.965141 251263 retry.go:31] will retry after 1.461085566s: waiting for domain to come up
I1121 23:46:55.428870 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:46:55.429589 251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
I1121 23:46:55.429605 251263 main.go:143] libmachine: trying to list again with source=arp
I1121 23:46:55.429939 251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
I1121 23:46:55.429981 251263 retry.go:31] will retry after 1.78072773s: waiting for domain to come up
I1121 23:46:57.213143 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:46:57.213941 251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
I1121 23:46:57.213961 251263 main.go:143] libmachine: trying to list again with source=arp
I1121 23:46:57.214320 251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
I1121 23:46:57.214355 251263 retry.go:31] will retry after 1.504173315s: waiting for domain to come up
I1121 23:46:58.719849 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:46:58.720746 251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
I1121 23:46:58.720770 251263 main.go:143] libmachine: trying to list again with source=arp
I1121 23:46:58.721137 251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
I1121 23:46:58.721173 251263 retry.go:31] will retry after 2.875642747s: waiting for domain to come up
I1121 23:47:01.600296 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:01.600945 251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
I1121 23:47:01.600961 251263 main.go:143] libmachine: trying to list again with source=arp
I1121 23:47:01.601274 251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
I1121 23:47:01.601321 251263 retry.go:31] will retry after 3.623260763s: waiting for domain to come up
I1121 23:47:05.227711 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:05.228458 251263 main.go:143] libmachine: domain addons-266876 has current primary IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:05.228475 251263 main.go:143] libmachine: found domain IP: 192.168.39.50
I1121 23:47:05.228486 251263 main.go:143] libmachine: reserving static IP address...
I1121 23:47:05.229043 251263 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-266876", mac: "52:54:00:ab:5a:31", ip: "192.168.39.50"} in network mk-addons-266876
I1121 23:47:05.530130 251263 main.go:143] libmachine: reserved static IP address 192.168.39.50 for domain addons-266876
I1121 23:47:05.530160 251263 main.go:143] libmachine: waiting for SSH...
I1121 23:47:05.530169 251263 main.go:143] libmachine: Getting to WaitForSSH function...
I1121 23:47:05.533988 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:05.534529 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:05.534565 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:05.534795 251263 main.go:143] libmachine: Using SSH client type: native
I1121 23:47:05.535088 251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.50 22 <nil> <nil>}
I1121 23:47:05.535104 251263 main.go:143] libmachine: About to run SSH command:
exit 0
I1121 23:47:05.657550 251263 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1121 23:47:05.657963 251263 main.go:143] libmachine: domain creation complete
I1121 23:47:05.659772 251263 machine.go:94] provisionDockerMachine start ...
I1121 23:47:05.662740 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:05.663237 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:05.663263 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:05.663525 251263 main.go:143] libmachine: Using SSH client type: native
I1121 23:47:05.663805 251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.50 22 <nil> <nil>}
I1121 23:47:05.663820 251263 main.go:143] libmachine: About to run SSH command:
hostname
I1121 23:47:05.773778 251263 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1121 23:47:05.773809 251263 buildroot.go:166] provisioning hostname "addons-266876"
I1121 23:47:05.777397 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:05.777855 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:05.777881 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:05.778090 251263 main.go:143] libmachine: Using SSH client type: native
I1121 23:47:05.778347 251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.50 22 <nil> <nil>}
I1121 23:47:05.778362 251263 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-266876 && echo "addons-266876" | sudo tee /etc/hostname
I1121 23:47:05.904549 251263 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-266876
I1121 23:47:05.907947 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:05.908399 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:05.908428 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:05.908637 251263 main.go:143] libmachine: Using SSH client type: native
I1121 23:47:05.908909 251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.50 22 <nil> <nil>}
I1121 23:47:05.908934 251263 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-266876' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-266876/g' /etc/hosts;
else
echo '127.0.1.1 addons-266876' | sudo tee -a /etc/hosts;
fi
fi
I1121 23:47:06.027505 251263 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1121 23:47:06.027542 251263 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21934-244751/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-244751/.minikube}
I1121 23:47:06.027606 251263 buildroot.go:174] setting up certificates
I1121 23:47:06.027620 251263 provision.go:84] configureAuth start
I1121 23:47:06.030823 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.031234 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:06.031255 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.033405 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.033742 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:06.033761 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.033873 251263 provision.go:143] copyHostCerts
I1121 23:47:06.033958 251263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/key.pem (1679 bytes)
I1121 23:47:06.034087 251263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/ca.pem (1078 bytes)
I1121 23:47:06.034147 251263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/cert.pem (1123 bytes)
I1121 23:47:06.034206 251263 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem org=jenkins.addons-266876 san=[127.0.0.1 192.168.39.50 addons-266876 localhost minikube]
I1121 23:47:06.088178 251263 provision.go:177] copyRemoteCerts
I1121 23:47:06.088255 251263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1121 23:47:06.090836 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.091229 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:06.091259 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.091419 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:06.177697 251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1121 23:47:06.208945 251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1121 23:47:06.240002 251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1121 23:47:06.271424 251263 provision.go:87] duration metric: took 243.786645ms to configureAuth
I1121 23:47:06.271463 251263 buildroot.go:189] setting minikube options for container-runtime
I1121 23:47:06.271718 251263 config.go:182] Loaded profile config "addons-266876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 23:47:06.275170 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.275691 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:06.275730 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.276021 251263 main.go:143] libmachine: Using SSH client type: native
I1121 23:47:06.276275 251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.50 22 <nil> <nil>}
I1121 23:47:06.276292 251263 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1121 23:47:06.522993 251263 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1121 23:47:06.523024 251263 machine.go:97] duration metric: took 863.230308ms to provisionDockerMachine
I1121 23:47:06.523034 251263 client.go:176] duration metric: took 18.337586387s to LocalClient.Create
I1121 23:47:06.523056 251263 start.go:167] duration metric: took 18.337642424s to libmachine.API.Create "addons-266876"
I1121 23:47:06.523067 251263 start.go:293] postStartSetup for "addons-266876" (driver="kvm2")
I1121 23:47:06.523080 251263 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1121 23:47:06.523174 251263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1121 23:47:06.526182 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.526662 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:06.526701 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.526857 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:06.616570 251263 ssh_runner.go:195] Run: cat /etc/os-release
I1121 23:47:06.622182 251263 info.go:137] Remote host: Buildroot 2025.02
I1121 23:47:06.622217 251263 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-244751/.minikube/addons for local assets ...
I1121 23:47:06.622288 251263 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-244751/.minikube/files for local assets ...
I1121 23:47:06.622311 251263 start.go:296] duration metric: took 99.238343ms for postStartSetup
I1121 23:47:06.625431 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.626043 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:06.626079 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.626664 251263 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/config.json ...
I1121 23:47:06.626937 251263 start.go:128] duration metric: took 18.44331085s to createHost
I1121 23:47:06.629842 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.630374 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:06.630404 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.630671 251263 main.go:143] libmachine: Using SSH client type: native
I1121 23:47:06.630883 251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.50 22 <nil> <nil>}
I1121 23:47:06.630893 251263 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1121 23:47:06.742838 251263 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763768826.701122136
I1121 23:47:06.742869 251263 fix.go:216] guest clock: 1763768826.701122136
I1121 23:47:06.742878 251263 fix.go:229] Guest: 2025-11-21 23:47:06.701122136 +0000 UTC Remote: 2025-11-21 23:47:06.626948375 +0000 UTC m=+18.545515405 (delta=74.173761ms)
I1121 23:47:06.742897 251263 fix.go:200] guest clock delta is within tolerance: 74.173761ms
I1121 23:47:06.742902 251263 start.go:83] releasing machines lock for "addons-266876", held for 18.559341059s
I1121 23:47:06.745883 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.746295 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:06.746321 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.746833 251263 ssh_runner.go:195] Run: cat /version.json
I1121 23:47:06.746947 251263 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1121 23:47:06.750243 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.750247 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.750776 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:06.750809 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.750823 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:06.750856 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:06.751031 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:06.751199 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:06.830906 251263 ssh_runner.go:195] Run: systemctl --version
I1121 23:47:06.862977 251263 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1121 23:47:07.024839 251263 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1121 23:47:07.032647 251263 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1121 23:47:07.032771 251263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1121 23:47:07.054527 251263 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1121 23:47:07.054564 251263 start.go:496] detecting cgroup driver to use...
I1121 23:47:07.054645 251263 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1121 23:47:07.075688 251263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1121 23:47:07.094661 251263 docker.go:218] disabling cri-docker service (if available) ...
I1121 23:47:07.094747 251263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1121 23:47:07.112602 251263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1121 23:47:07.129177 251263 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1121 23:47:07.274890 251263 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1121 23:47:07.492757 251263 docker.go:234] disabling docker service ...
I1121 23:47:07.492831 251263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1121 23:47:07.510021 251263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1121 23:47:07.525620 251263 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1121 23:47:07.675935 251263 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1121 23:47:07.820400 251263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1121 23:47:07.837622 251263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1121 23:47:07.861864 251263 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1121 23:47:07.861942 251263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1121 23:47:07.875198 251263 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1121 23:47:07.875282 251263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1121 23:47:07.889198 251263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1121 23:47:07.902595 251263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1121 23:47:07.915879 251263 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1121 23:47:07.929954 251263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1121 23:47:07.943664 251263 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1121 23:47:07.965719 251263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1121 23:47:07.978868 251263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1121 23:47:07.991074 251263 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1121 23:47:07.991144 251263 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1121 23:47:08.015804 251263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1121 23:47:08.029594 251263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1121 23:47:08.172544 251263 ssh_runner.go:195] Run: sudo systemctl restart crio
I1121 23:47:08.286465 251263 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1121 23:47:08.286546 251263 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1121 23:47:08.292422 251263 start.go:564] Will wait 60s for crictl version
I1121 23:47:08.292523 251263 ssh_runner.go:195] Run: which crictl
I1121 23:47:08.297252 251263 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1121 23:47:08.333825 251263 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1121 23:47:08.333924 251263 ssh_runner.go:195] Run: crio --version
I1121 23:47:08.364777 251263 ssh_runner.go:195] Run: crio --version
I1121 23:47:08.397593 251263 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
I1121 23:47:08.401817 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:08.402315 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:08.402343 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:08.402614 251263 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1121 23:47:08.408058 251263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1121 23:47:08.427560 251263 kubeadm.go:884] updating cluster {Name:addons-266876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1121 23:47:08.427708 251263 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1121 23:47:08.427752 251263 ssh_runner.go:195] Run: sudo crictl images --output json
I1121 23:47:08.466046 251263 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
I1121 23:47:08.466131 251263 ssh_runner.go:195] Run: which lz4
I1121 23:47:08.471268 251263 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1121 23:47:08.476699 251263 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1121 23:47:08.476733 251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
I1121 23:47:10.046904 251263 crio.go:462] duration metric: took 1.575665951s to copy over tarball
I1121 23:47:10.046997 251263 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1121 23:47:11.663077 251263 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.616046572s)
I1121 23:47:11.663118 251263 crio.go:469] duration metric: took 1.616181048s to extract the tarball
I1121 23:47:11.663129 251263 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1121 23:47:11.705893 251263 ssh_runner.go:195] Run: sudo crictl images --output json
I1121 23:47:11.746467 251263 crio.go:514] all images are preloaded for cri-o runtime.
I1121 23:47:11.746493 251263 cache_images.go:86] Images are preloaded, skipping loading
I1121 23:47:11.746502 251263 kubeadm.go:935] updating node { 192.168.39.50 8443 v1.34.1 crio true true} ...
I1121 23:47:11.746609 251263 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-266876 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.50
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1121 23:47:11.746698 251263 ssh_runner.go:195] Run: crio config
I1121 23:47:11.795708 251263 cni.go:84] Creating CNI manager for ""
I1121 23:47:11.795739 251263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1121 23:47:11.795759 251263 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1121 23:47:11.795781 251263 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.50 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-266876 NodeName:addons-266876 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1121 23:47:11.795901 251263 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.50
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-266876"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.50"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1121 23:47:11.795977 251263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1121 23:47:11.808516 251263 binaries.go:51] Found k8s binaries, skipping transfer
I1121 23:47:11.808581 251263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1121 23:47:11.820622 251263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I1121 23:47:11.842831 251263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1121 23:47:11.864556 251263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
I1121 23:47:11.887018 251263 ssh_runner.go:195] Run: grep 192.168.39.50 control-plane.minikube.internal$ /etc/hosts
I1121 23:47:11.891743 251263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.50 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1121 23:47:11.907140 251263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1121 23:47:12.050500 251263 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1121 23:47:12.084445 251263 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876 for IP: 192.168.39.50
I1121 23:47:12.084477 251263 certs.go:195] generating shared ca certs ...
I1121 23:47:12.084503 251263 certs.go:227] acquiring lock for ca certs: {Name:mk43fa762c6315605485300e07cf83f8f357f8dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1121 23:47:12.084733 251263 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key
I1121 23:47:12.219080 251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt ...
I1121 23:47:12.219114 251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt: {Name:mk4ab860b5f00eeacc7d5a064e6b8682b8350cc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1121 23:47:12.219328 251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key ...
I1121 23:47:12.219350 251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key: {Name:mkd33a6a072a0fb7cb39783adfcb9f792da25f35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1121 23:47:12.219466 251263 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key
I1121 23:47:12.275894 251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.crt ...
I1121 23:47:12.275930 251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.crt: {Name:mk4874a4ae2a76e1a44a3b81a6402bcd1f4b9663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1121 23:47:12.276126 251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key ...
I1121 23:47:12.276145 251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key: {Name:mk1d8c1db5a8f9f2ab09a6bc1211706c413d6bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1121 23:47:12.276291 251263 certs.go:257] generating profile certs ...
I1121 23:47:12.276376 251263 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.key
I1121 23:47:12.276402 251263 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt with IP's: []
I1121 23:47:12.405508 251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt ...
I1121 23:47:12.405541 251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: {Name:mkcc0d2bdbfeba71ea1f4e63e41e1151d9d382ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1121 23:47:12.405791 251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.key ...
I1121 23:47:12.405812 251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.key: {Name:mk1d82213fc29dcec5419cdd18c321f7613a56e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1121 23:47:12.405953 251263 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key.8d7367ca
I1121 23:47:12.405982 251263 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt.8d7367ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.50]
I1121 23:47:12.443135 251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt.8d7367ca ...
I1121 23:47:12.443162 251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt.8d7367ca: {Name:mk318161f2384c8556874dd6e6e5fc8eee5c9cdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1121 23:47:12.443363 251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key.8d7367ca ...
I1121 23:47:12.443385 251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key.8d7367ca: {Name:mke2fa439b03069f58550af68f202fe26e9c97ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1121 23:47:12.443489 251263 certs.go:382] copying /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt.8d7367ca -> /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt
I1121 23:47:12.443595 251263 certs.go:386] copying /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key.8d7367ca -> /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key
I1121 23:47:12.443670 251263 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.key
I1121 23:47:12.443705 251263 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.crt with IP's: []
I1121 23:47:12.603488 251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.crt ...
I1121 23:47:12.603520 251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.crt: {Name:mk795b280bcd9c59cf78ec03ece9d4b0753eaaa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1121 23:47:12.603755 251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.key ...
I1121 23:47:12.603779 251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.key: {Name:mkfe4eecc4523b56c0d41272318c6e77ecb4dd52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1121 23:47:12.604032 251263 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem (1675 bytes)
I1121 23:47:12.604112 251263 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem (1078 bytes)
I1121 23:47:12.604152 251263 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem (1123 bytes)
I1121 23:47:12.604194 251263 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem (1679 bytes)
I1121 23:47:12.604861 251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1121 23:47:12.637531 251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1121 23:47:12.669272 251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1121 23:47:12.700033 251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1121 23:47:12.730398 251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1121 23:47:12.766760 251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1121 23:47:12.814595 251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1121 23:47:12.848615 251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1121 23:47:12.879920 251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1121 23:47:12.912022 251263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1121 23:47:12.933857 251263 ssh_runner.go:195] Run: openssl version
I1121 23:47:12.940506 251263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1121 23:47:12.953948 251263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1121 23:47:12.959503 251263 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:47 /usr/share/ca-certificates/minikubeCA.pem
I1121 23:47:12.959560 251263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1121 23:47:12.967627 251263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1121 23:47:12.981398 251263 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1121 23:47:12.986879 251263 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1121 23:47:12.986957 251263 kubeadm.go:401] StartCluster: {Name:addons-266876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1121 23:47:12.987064 251263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1121 23:47:12.987158 251263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1121 23:47:13.025633 251263 cri.go:89] found id: ""
I1121 23:47:13.025741 251263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1121 23:47:13.038755 251263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1121 23:47:13.052370 251263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1121 23:47:13.065036 251263 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1121 23:47:13.065062 251263 kubeadm.go:158] found existing configuration files:
I1121 23:47:13.065139 251263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1121 23:47:13.077032 251263 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1121 23:47:13.077097 251263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1121 23:47:13.090073 251263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1121 23:47:13.101398 251263 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1121 23:47:13.101465 251263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1121 23:47:13.114396 251263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1121 23:47:13.126235 251263 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1121 23:47:13.126304 251263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1121 23:47:13.139694 251263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1121 23:47:13.151819 251263 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1121 23:47:13.151882 251263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1121 23:47:13.164512 251263 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1121 23:47:13.226756 251263 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
I1121 23:47:13.226832 251263 kubeadm.go:319] [preflight] Running pre-flight checks
I1121 23:47:13.345339 251263 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1121 23:47:13.345491 251263 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1121 23:47:13.345647 251263 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1121 23:47:13.359341 251263 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1121 23:47:13.436841 251263 out.go:252] - Generating certificates and keys ...
I1121 23:47:13.437031 251263 kubeadm.go:319] [certs] Using existing ca certificate authority
I1121 23:47:13.437171 251263 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1121 23:47:13.558105 251263 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1121 23:47:13.651102 251263 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1121 23:47:13.902476 251263 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1121 23:47:14.134826 251263 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1121 23:47:14.345459 251263 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1121 23:47:14.345645 251263 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-266876 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
I1121 23:47:14.583497 251263 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1121 23:47:14.583717 251263 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-266876 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
I1121 23:47:14.931062 251263 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1121 23:47:15.434495 251263 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1121 23:47:15.838983 251263 kubeadm.go:319] [certs] Generating "sa" key and public key
I1121 23:47:15.839096 251263 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1121 23:47:15.963541 251263 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1121 23:47:16.269311 251263 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1121 23:47:16.929016 251263 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1121 23:47:17.056928 251263 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1121 23:47:17.384976 251263 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1121 23:47:17.385309 251263 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1121 23:47:17.387510 251263 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1121 23:47:17.389626 251263 out.go:252] - Booting up control plane ...
I1121 23:47:17.389730 251263 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1121 23:47:17.389802 251263 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1121 23:47:17.389859 251263 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1121 23:47:17.408245 251263 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1121 23:47:17.408393 251263 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1121 23:47:17.416098 251263 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1121 23:47:17.416463 251263 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1121 23:47:17.416528 251263 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1121 23:47:17.572061 251263 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1121 23:47:17.572273 251263 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1121 23:47:18.575810 251263 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.003449114s
I1121 23:47:18.581453 251263 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1121 23:47:18.581592 251263 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.50:8443/livez
I1121 23:47:18.581745 251263 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1121 23:47:18.581872 251263 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1121 23:47:21.444953 251263 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.865438426s
I1121 23:47:22.473854 251263 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.895647364s
I1121 23:47:24.581213 251263 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.003558147s
I1121 23:47:24.600634 251263 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1121 23:47:24.621062 251263 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1121 23:47:24.638002 251263 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1121 23:47:24.638263 251263 kubeadm.go:319] [mark-control-plane] Marking the node addons-266876 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1121 23:47:24.652039 251263 kubeadm.go:319] [bootstrap-token] Using token: grn95n.s74ahx9w73uu3ca1
I1121 23:47:24.653732 251263 out.go:252] - Configuring RBAC rules ...
I1121 23:47:24.653880 251263 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1121 23:47:24.659155 251263 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1121 23:47:24.672314 251263 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1121 23:47:24.680496 251263 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1121 23:47:24.684483 251263 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1121 23:47:24.688905 251263 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1121 23:47:24.990519 251263 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1121 23:47:25.446692 251263 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1121 23:47:25.987142 251263 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1121 23:47:25.988495 251263 kubeadm.go:319]
I1121 23:47:25.988586 251263 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1121 23:47:25.988628 251263 kubeadm.go:319]
I1121 23:47:25.988755 251263 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1121 23:47:25.988774 251263 kubeadm.go:319]
I1121 23:47:25.988799 251263 kubeadm.go:319] mkdir -p $HOME/.kube
I1121 23:47:25.988879 251263 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1121 23:47:25.988970 251263 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1121 23:47:25.988990 251263 kubeadm.go:319]
I1121 23:47:25.989051 251263 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1121 23:47:25.989061 251263 kubeadm.go:319]
I1121 23:47:25.989146 251263 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1121 23:47:25.989158 251263 kubeadm.go:319]
I1121 23:47:25.989248 251263 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1121 23:47:25.989366 251263 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1121 23:47:25.989475 251263 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1121 23:47:25.989488 251263 kubeadm.go:319]
I1121 23:47:25.989602 251263 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1121 23:47:25.989728 251263 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1121 23:47:25.989738 251263 kubeadm.go:319]
I1121 23:47:25.989856 251263 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token grn95n.s74ahx9w73uu3ca1 \
I1121 23:47:25.990007 251263 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:7035eabdb6dc9c299f99d6120e0649f8a13de0412ab5d63e88dba6debc1b302c \
I1121 23:47:25.990049 251263 kubeadm.go:319] --control-plane
I1121 23:47:25.990057 251263 kubeadm.go:319]
I1121 23:47:25.990176 251263 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1121 23:47:25.990186 251263 kubeadm.go:319]
I1121 23:47:25.990300 251263 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token grn95n.s74ahx9w73uu3ca1 \
I1121 23:47:25.990438 251263 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:7035eabdb6dc9c299f99d6120e0649f8a13de0412ab5d63e88dba6debc1b302c
I1121 23:47:25.992560 251263 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1121 23:47:25.992602 251263 cni.go:84] Creating CNI manager for ""
I1121 23:47:25.992623 251263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1121 23:47:25.994543 251263 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1121 23:47:25.996106 251263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1121 23:47:26.010555 251263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1121 23:47:26.033834 251263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1121 23:47:26.033972 251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1121 23:47:26.033980 251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-266876 minikube.k8s.io/updated_at=2025_11_21T23_47_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=addons-266876 minikube.k8s.io/primary=true
I1121 23:47:26.084057 251263 ops.go:34] apiserver oom_adj: -16
I1121 23:47:26.203325 251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1121 23:47:26.704291 251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1121 23:47:27.204057 251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1121 23:47:27.704402 251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1121 23:47:28.204383 251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1121 23:47:28.704103 251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1121 23:47:29.204400 251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1121 23:47:29.704060 251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1121 23:47:30.204340 251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1121 23:47:30.314187 251263 kubeadm.go:1114] duration metric: took 4.280316282s to wait for elevateKubeSystemPrivileges
I1121 23:47:30.314239 251263 kubeadm.go:403] duration metric: took 17.327291456s to StartCluster
I1121 23:47:30.314270 251263 settings.go:142] acquiring lock: {Name:mkd124ec98418d6d2386a8f1a0e2e5ff6f0f99d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1121 23:47:30.314449 251263 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21934-244751/kubeconfig
I1121 23:47:30.314952 251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/kubeconfig: {Name:mkbde37dbfe874aace118914fefd91b607e3afff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1121 23:47:30.315195 251263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1121 23:47:30.315224 251263 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1121 23:47:30.315300 251263 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1121 23:47:30.315425 251263 addons.go:70] Setting yakd=true in profile "addons-266876"
I1121 23:47:30.315450 251263 addons.go:239] Setting addon yakd=true in "addons-266876"
I1121 23:47:30.315462 251263 addons.go:70] Setting inspektor-gadget=true in profile "addons-266876"
I1121 23:47:30.315485 251263 host.go:66] Checking if "addons-266876" exists ...
I1121 23:47:30.315491 251263 addons.go:239] Setting addon inspektor-gadget=true in "addons-266876"
I1121 23:47:30.315501 251263 addons.go:70] Setting default-storageclass=true in profile "addons-266876"
I1121 23:47:30.315529 251263 host.go:66] Checking if "addons-266876" exists ...
I1121 23:47:30.315528 251263 config.go:182] Loaded profile config "addons-266876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 23:47:30.315544 251263 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-266876"
I1121 23:47:30.315569 251263 addons.go:70] Setting cloud-spanner=true in profile "addons-266876"
I1121 23:47:30.315601 251263 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-266876"
I1121 23:47:30.315604 251263 addons.go:70] Setting registry-creds=true in profile "addons-266876"
I1121 23:47:30.315608 251263 addons.go:239] Setting addon cloud-spanner=true in "addons-266876"
I1121 23:47:30.315620 251263 addons.go:239] Setting addon registry-creds=true in "addons-266876"
I1121 23:47:30.315642 251263 host.go:66] Checking if "addons-266876" exists ...
I1121 23:47:30.315644 251263 host.go:66] Checking if "addons-266876" exists ...
I1121 23:47:30.315644 251263 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-266876"
I1121 23:47:30.315691 251263 host.go:66] Checking if "addons-266876" exists ...
I1121 23:47:30.315903 251263 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-266876"
I1121 23:47:30.315921 251263 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-266876"
I1121 23:47:30.315947 251263 host.go:66] Checking if "addons-266876" exists ...
I1121 23:47:30.316235 251263 addons.go:70] Setting ingress=true in profile "addons-266876"
I1121 23:47:30.316274 251263 addons.go:239] Setting addon ingress=true in "addons-266876"
I1121 23:47:30.316310 251263 host.go:66] Checking if "addons-266876" exists ...
I1121 23:47:30.316663 251263 addons.go:70] Setting registry=true in profile "addons-266876"
I1121 23:47:30.316697 251263 addons.go:239] Setting addon registry=true in "addons-266876"
I1121 23:47:30.316723 251263 host.go:66] Checking if "addons-266876" exists ...
I1121 23:47:30.317068 251263 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-266876"
I1121 23:47:30.317089 251263 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-266876"
I1121 23:47:30.317115 251263 host.go:66] Checking if "addons-266876" exists ...
I1121 23:47:30.317160 251263 addons.go:70] Setting gcp-auth=true in profile "addons-266876"
I1121 23:47:30.315588 251263 addons.go:70] Setting ingress-dns=true in profile "addons-266876"
I1121 23:47:30.317206 251263 mustload.go:66] Loading cluster: addons-266876
I1121 23:47:30.317231 251263 addons.go:239] Setting addon ingress-dns=true in "addons-266876"
I1121 23:47:30.317253 251263 addons.go:70] Setting metrics-server=true in profile "addons-266876"
I1121 23:47:30.317268 251263 host.go:66] Checking if "addons-266876" exists ...
I1121 23:47:30.317272 251263 addons.go:239] Setting addon metrics-server=true in "addons-266876"
I1121 23:47:30.317299 251263 host.go:66] Checking if "addons-266876" exists ...
I1121 23:47:30.317400 251263 config.go:182] Loaded profile config "addons-266876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 23:47:30.317441 251263 addons.go:70] Setting storage-provisioner=true in profile "addons-266876"
I1121 23:47:30.317460 251263 addons.go:239] Setting addon storage-provisioner=true in "addons-266876"
I1121 23:47:30.317490 251263 host.go:66] Checking if "addons-266876" exists ...
I1121 23:47:30.317944 251263 addons.go:70] Setting volcano=true in profile "addons-266876"
I1121 23:47:30.317973 251263 addons.go:239] Setting addon volcano=true in "addons-266876"
I1121 23:47:30.318000 251263 host.go:66] Checking if "addons-266876" exists ...
I1121 23:47:30.318181 251263 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-266876"
I1121 23:47:30.318207 251263 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-266876"
I1121 23:47:30.318457 251263 addons.go:70] Setting volumesnapshots=true in profile "addons-266876"
I1121 23:47:30.318489 251263 addons.go:239] Setting addon volumesnapshots=true in "addons-266876"
I1121 23:47:30.318514 251263 host.go:66] Checking if "addons-266876" exists ...
I1121 23:47:30.318636 251263 out.go:179] * Verifying Kubernetes components...
I1121 23:47:30.321872 251263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1121 23:47:30.323979 251263 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1121 23:47:30.324015 251263 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1121 23:47:30.324059 251263 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
I1121 23:47:30.324308 251263 addons.go:239] Setting addon default-storageclass=true in "addons-266876"
I1121 23:47:30.324852 251263 host.go:66] Checking if "addons-266876" exists ...
I1121 23:47:30.325430 251263 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
I1121 23:47:30.325460 251263 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1121 23:47:30.325834 251263 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1121 23:47:30.325536 251263 host.go:66] Checking if "addons-266876" exists ...
I1121 23:47:30.326179 251263 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1121 23:47:30.326187 251263 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1121 23:47:30.326317 251263 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1121 23:47:30.326336 251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1121 23:47:30.326936 251263 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1121 23:47:30.326998 251263 out.go:179] - Using image docker.io/registry:3.0.0
I1121 23:47:30.326980 251263 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1121 23:47:30.327044 251263 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1121 23:47:30.327543 251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
W1121 23:47:30.327112 251263 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1121 23:47:30.327823 251263 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1121 23:47:30.327894 251263 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1121 23:47:30.328316 251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1121 23:47:30.327908 251263 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1121 23:47:30.327937 251263 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1121 23:47:30.328129 251263 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-266876"
I1121 23:47:30.328994 251263 host.go:66] Checking if "addons-266876" exists ...
I1121 23:47:30.328605 251263 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1121 23:47:30.328665 251263 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1121 23:47:30.328694 251263 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1121 23:47:30.330248 251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1121 23:47:30.329173 251263 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1121 23:47:30.330310 251263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1121 23:47:30.330603 251263 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1121 23:47:30.330604 251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1121 23:47:30.331083 251263 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1121 23:47:30.330604 251263 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1121 23:47:30.330630 251263 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1121 23:47:30.331264 251263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1121 23:47:30.330646 251263 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
I1121 23:47:30.330654 251263 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1121 23:47:30.331990 251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1121 23:47:30.330703 251263 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1121 23:47:30.332116 251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1121 23:47:30.331545 251263 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1121 23:47:30.332194 251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1121 23:47:30.332542 251263 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1121 23:47:30.332882 251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1121 23:47:30.334102 251263 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1121 23:47:30.334436 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.335240 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.335327 251263 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1121 23:47:30.335355 251263 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1121 23:47:30.336111 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:30.336119 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.336147 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.336581 251263 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1121 23:47:30.336829 251263 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1121 23:47:30.336847 251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1121 23:47:30.336857 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:30.336898 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.336963 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:30.337875 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:30.337944 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:30.337986 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.338791 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:30.338889 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.339032 251263 out.go:179] - Using image docker.io/busybox:stable
I1121 23:47:30.339781 251263 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1121 23:47:30.340483 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:30.340514 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.340666 251263 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1121 23:47:30.340695 251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1121 23:47:30.340797 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.341117 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:30.341357 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.342122 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.342189 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:30.342220 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.342778 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:30.342795 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.342811 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.342975 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.343022 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:30.343206 251263 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1121 23:47:30.343363 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.343504 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:30.343566 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:30.343596 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.344162 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.344636 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:30.344648 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:30.344718 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.344930 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:30.344977 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.345068 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:30.345337 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:30.345379 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.345381 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.345342 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:30.345569 251263 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1121 23:47:30.345654 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:30.346248 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:30.346289 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.346396 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:30.346427 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.346508 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:30.346706 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:30.346995 251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1121 23:47:30.347011 251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1121 23:47:30.347328 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.347842 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:30.347873 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.348042 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:30.348168 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.348658 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:30.348696 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.348924 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:30.349955 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.350423 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:30.350455 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:30.350644 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
W1121 23:47:30.571554 251263 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53184->192.168.39.50:22: read: connection reset by peer
I1121 23:47:30.571604 251263 retry.go:31] will retry after 237.893493ms: ssh: handshake failed: read tcp 192.168.39.1:53184->192.168.39.50:22: read: connection reset by peer
W1121 23:47:30.594670 251263 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53214->192.168.39.50:22: read: connection reset by peer
I1121 23:47:30.594718 251263 retry.go:31] will retry after 219.796697ms: ssh: handshake failed: read tcp 192.168.39.1:53214->192.168.39.50:22: read: connection reset by peer
W1121 23:47:30.648821 251263 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53232->192.168.39.50:22: read: connection reset by peer
I1121 23:47:30.648855 251263 retry.go:31] will retry after 280.923937ms: ssh: handshake failed: read tcp 192.168.39.1:53232->192.168.39.50:22: read: connection reset by peer
I1121 23:47:30.906273 251263 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1121 23:47:30.906343 251263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1121 23:47:31.303471 251263 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1121 23:47:31.303497 251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1121 23:47:31.303519 251263 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1121 23:47:31.329075 251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1121 23:47:31.372362 251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1121 23:47:31.401245 251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1121 23:47:31.443583 251263 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1121 23:47:31.443617 251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1121 23:47:31.448834 251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1121 23:47:31.496006 251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1121 23:47:31.498539 251263 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1121 23:47:31.498563 251263 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1121 23:47:31.569835 251263 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1121 23:47:31.569869 251263 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1121 23:47:31.572494 251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1121 23:47:31.624422 251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1121 23:47:31.627643 251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1121 23:47:31.900562 251263 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1121 23:47:31.900602 251263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1121 23:47:32.010439 251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1121 23:47:32.024813 251263 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1121 23:47:32.024876 251263 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1121 23:47:32.170850 251263 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1121 23:47:32.170888 251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1121 23:47:32.219733 251263 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1121 23:47:32.219791 251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1121 23:47:32.404951 251263 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1121 23:47:32.404996 251263 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1121 23:47:32.544216 251263 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1121 23:47:32.544253 251263 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1121 23:47:32.578250 251263 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1121 23:47:32.578284 251263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1121 23:47:32.653254 251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1121 23:47:32.653285 251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1121 23:47:32.741481 251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1121 23:47:32.794874 251263 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1121 23:47:32.794909 251263 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1121 23:47:32.881148 251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1121 23:47:33.067639 251263 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1121 23:47:33.067700 251263 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1121 23:47:33.067715 251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1121 23:47:33.067738 251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1121 23:47:33.271805 251263 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1121 23:47:33.271834 251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1121 23:47:33.312325 251263 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1121 23:47:33.312356 251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1121 23:47:33.436072 251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1121 23:47:33.436107 251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1121 23:47:33.708500 251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1121 23:47:33.708927 251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1121 23:47:34.040431 251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1121 23:47:34.040474 251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1121 23:47:34.408465 251263 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.502153253s)
I1121 23:47:34.408519 251263 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.502134143s)
I1121 23:47:34.408554 251263 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1121 23:47:34.408578 251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.105046996s)
I1121 23:47:34.409219 251263 node_ready.go:35] waiting up to 6m0s for node "addons-266876" to be "Ready" ...
I1121 23:47:34.415213 251263 node_ready.go:49] node "addons-266876" is "Ready"
I1121 23:47:34.415248 251263 node_ready.go:38] duration metric: took 6.005684ms for node "addons-266876" to be "Ready" ...
I1121 23:47:34.415268 251263 api_server.go:52] waiting for apiserver process to appear ...
I1121 23:47:34.415324 251263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1121 23:47:34.664082 251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1121 23:47:34.664113 251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1121 23:47:34.918427 251263 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-266876" context rescaled to 1 replicas
I1121 23:47:35.149255 251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1121 23:47:35.149293 251263 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1121 23:47:35.732395 251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1121 23:47:35.732425 251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1121 23:47:36.406188 251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1121 23:47:36.406216 251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1121 23:47:36.897571 251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1121 23:47:36.897608 251263 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1121 23:47:37.313754 251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1121 23:47:37.790744 251263 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1121 23:47:37.793928 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:37.794570 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:37.794603 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:37.794806 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:38.530200 251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.201079248s)
I1121 23:47:38.530311 251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.15790373s)
I1121 23:47:38.530349 251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.129067228s)
I1121 23:47:38.530410 251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.081551551s)
I1121 23:47:38.530485 251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.034438414s)
I1121 23:47:38.530531 251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.958009964s)
I1121 23:47:38.530576 251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.90611639s)
I1121 23:47:38.530688 251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.902998512s)
W1121 23:47:38.596091 251263 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I1121 23:47:38.696471 251263 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1121 23:47:39.049239 251263 addons.go:239] Setting addon gcp-auth=true in "addons-266876"
I1121 23:47:39.049319 251263 host.go:66] Checking if "addons-266876" exists ...
I1121 23:47:39.051589 251263 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1121 23:47:39.054431 251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:39.054905 251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
I1121 23:47:39.054946 251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
I1121 23:47:39.055124 251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
I1121 23:47:40.911949 251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.901459816s)
I1121 23:47:40.912003 251263 addons.go:495] Verifying addon ingress=true in "addons-266876"
I1121 23:47:40.912027 251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.170505015s)
I1121 23:47:40.912060 251263 addons.go:495] Verifying addon registry=true in "addons-266876"
I1121 23:47:40.912106 251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.030918863s)
I1121 23:47:40.912208 251263 addons.go:495] Verifying addon metrics-server=true in "addons-266876"
I1121 23:47:40.913759 251263 out.go:179] * Verifying ingress addon...
I1121 23:47:40.913769 251263 out.go:179] * Verifying registry addon...
I1121 23:47:40.916006 251263 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1121 23:47:40.916028 251263 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1121 23:47:41.040220 251263 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1121 23:47:41.040250 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:41.043403 251263 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1121 23:47:41.043428 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:41.261875 251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.5533177s)
W1121 23:47:41.261945 251263 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1121 23:47:41.261983 251263 retry.go:31] will retry after 128.365697ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1121 23:47:41.262010 251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.553035838s)
I1121 23:47:41.262077 251263 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.846726255s)
I1121 23:47:41.262115 251263 api_server.go:72] duration metric: took 10.946861397s to wait for apiserver process to appear ...
I1121 23:47:41.262194 251263 api_server.go:88] waiting for apiserver healthz status ...
I1121 23:47:41.262220 251263 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
I1121 23:47:41.263907 251263 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-266876 service yakd-dashboard -n yakd-dashboard
I1121 23:47:41.282742 251263 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
ok
I1121 23:47:41.287497 251263 api_server.go:141] control plane version: v1.34.1
I1121 23:47:41.287535 251263 api_server.go:131] duration metric: took 25.332513ms to wait for apiserver health ...
I1121 23:47:41.287548 251263 system_pods.go:43] waiting for kube-system pods to appear ...
I1121 23:47:41.306603 251263 system_pods.go:59] 16 kube-system pods found
I1121 23:47:41.306658 251263 system_pods.go:61] "amd-gpu-device-plugin-pd4sx" [88fffae7-a3c2-46ef-a382-867c1f45dd2f] Running
I1121 23:47:41.306672 251263 system_pods.go:61] "coredns-66bc5c9577-kmf4p" [c2dae9ee-3a8e-4c5c-9880-256aae9475c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1121 23:47:41.306696 251263 system_pods.go:61] "coredns-66bc5c9577-tgk67" [ad56ae13-a7c4-44e3-a817-73aa300110b6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1121 23:47:41.306706 251263 system_pods.go:61] "etcd-addons-266876" [6329b2aa-df0d-4707-8094-52ed6a9b70fa] Running
I1121 23:47:41.306714 251263 system_pods.go:61] "kube-apiserver-addons-266876" [d6500ed5-1e8a-40e7-8761-ce5d9b817580] Running
I1121 23:47:41.306720 251263 system_pods.go:61] "kube-controller-manager-addons-266876" [9afdbaac-04de-4ae4-a1a5-ab74382c1ee4] Running
I1121 23:47:41.306728 251263 system_pods.go:61] "kube-ingress-dns-minikube" [4c8445af-f050-4525-a580-c6cb45567d21] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1121 23:47:41.306737 251263 system_pods.go:61] "kube-proxy-d6jsf" [8c9f1dbf-19b7-4f19-8f33-11b6886f1237] Running
I1121 23:47:41.306742 251263 system_pods.go:61] "kube-scheduler-addons-266876" [c367cb26-5fd3-4d41-8752-d8b7d6fe6c13] Running
I1121 23:47:41.306749 251263 system_pods.go:61] "metrics-server-85b7d694d7-tcd7p" [8b9a51ce-61d0-430c-98de-9174d78d47d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1121 23:47:41.306759 251263 system_pods.go:61] "nvidia-device-plugin-daemonset-6fx49" [4603aca8-96ff-429c-870e-aaa1a7987b07] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1121 23:47:41.306768 251263 system_pods.go:61] "registry-6b586f9694-g5tcd" [5c882c89-9d82-4657-b7a0-d20f145866ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1121 23:47:41.306780 251263 system_pods.go:61] "registry-creds-764b6fb674-c6k42" [b80bedd0-b303-4ba0-9c40-f2fd2464333c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1121 23:47:41.306789 251263 system_pods.go:61] "registry-proxy-xjdbm" [9ce7be2d-00fc-42ac-8617-38e2d4ecac77] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1121 23:47:41.306795 251263 system_pods.go:61] "snapshot-controller-7d9fbc56b8-r57wx" [136bb70d-9950-46db-83d9-09b543dc4f72] Pending
I1121 23:47:41.306803 251263 system_pods.go:61] "storage-provisioner" [2855a3de-b990-447c-b094-274b5becf1da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1121 23:47:41.306812 251263 system_pods.go:74] duration metric: took 19.257263ms to wait for pod list to return data ...
I1121 23:47:41.306823 251263 default_sa.go:34] waiting for default service account to be created ...
I1121 23:47:41.323263 251263 default_sa.go:45] found service account: "default"
I1121 23:47:41.323302 251263 default_sa.go:55] duration metric: took 16.457401ms for default service account to be created ...
I1121 23:47:41.323317 251263 system_pods.go:116] waiting for k8s-apps to be running ...
I1121 23:47:41.337749 251263 system_pods.go:86] 17 kube-system pods found
I1121 23:47:41.337783 251263 system_pods.go:89] "amd-gpu-device-plugin-pd4sx" [88fffae7-a3c2-46ef-a382-867c1f45dd2f] Running
I1121 23:47:41.337791 251263 system_pods.go:89] "coredns-66bc5c9577-kmf4p" [c2dae9ee-3a8e-4c5c-9880-256aae9475c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1121 23:47:41.337797 251263 system_pods.go:89] "coredns-66bc5c9577-tgk67" [ad56ae13-a7c4-44e3-a817-73aa300110b6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1121 23:47:41.337803 251263 system_pods.go:89] "etcd-addons-266876" [6329b2aa-df0d-4707-8094-52ed6a9b70fa] Running
I1121 23:47:41.337808 251263 system_pods.go:89] "kube-apiserver-addons-266876" [d6500ed5-1e8a-40e7-8761-ce5d9b817580] Running
I1121 23:47:41.337812 251263 system_pods.go:89] "kube-controller-manager-addons-266876" [9afdbaac-04de-4ae4-a1a5-ab74382c1ee4] Running
I1121 23:47:41.337817 251263 system_pods.go:89] "kube-ingress-dns-minikube" [4c8445af-f050-4525-a580-c6cb45567d21] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1121 23:47:41.337821 251263 system_pods.go:89] "kube-proxy-d6jsf" [8c9f1dbf-19b7-4f19-8f33-11b6886f1237] Running
I1121 23:47:41.337826 251263 system_pods.go:89] "kube-scheduler-addons-266876" [c367cb26-5fd3-4d41-8752-d8b7d6fe6c13] Running
I1121 23:47:41.337831 251263 system_pods.go:89] "metrics-server-85b7d694d7-tcd7p" [8b9a51ce-61d0-430c-98de-9174d78d47d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1121 23:47:41.337839 251263 system_pods.go:89] "nvidia-device-plugin-daemonset-6fx49" [4603aca8-96ff-429c-870e-aaa1a7987b07] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1121 23:47:41.337844 251263 system_pods.go:89] "registry-6b586f9694-g5tcd" [5c882c89-9d82-4657-b7a0-d20f145866ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1121 23:47:41.337849 251263 system_pods.go:89] "registry-creds-764b6fb674-c6k42" [b80bedd0-b303-4ba0-9c40-f2fd2464333c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1121 23:47:41.337854 251263 system_pods.go:89] "registry-proxy-xjdbm" [9ce7be2d-00fc-42ac-8617-38e2d4ecac77] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1121 23:47:41.337876 251263 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gcprx" [38cf49f5-ed6e-4aa5-bdfe-2494e5763f39] Pending
I1121 23:47:41.337881 251263 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r57wx" [136bb70d-9950-46db-83d9-09b543dc4f72] Pending
I1121 23:47:41.337885 251263 system_pods.go:89] "storage-provisioner" [2855a3de-b990-447c-b094-274b5becf1da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1121 23:47:41.337897 251263 system_pods.go:126] duration metric: took 14.572276ms to wait for k8s-apps to be running ...
I1121 23:47:41.337909 251263 system_svc.go:44] waiting for kubelet service to be running ....
I1121 23:47:41.337964 251263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1121 23:47:41.391055 251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1121 23:47:41.444001 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:41.452955 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:41.927933 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:41.929997 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:42.455799 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:42.455860 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:42.926969 251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.613140073s)
I1121 23:47:42.927027 251263 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-266876"
I1121 23:47:42.927049 251263 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.875424504s)
I1121 23:47:42.927114 251263 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.589124511s)
I1121 23:47:42.927233 251263 system_svc.go:56] duration metric: took 1.589318384s WaitForService to wait for kubelet
I1121 23:47:42.927248 251263 kubeadm.go:587] duration metric: took 12.611994145s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1121 23:47:42.927275 251263 node_conditions.go:102] verifying NodePressure condition ...
I1121 23:47:42.928903 251263 out.go:179] * Verifying csi-hostpath-driver addon...
I1121 23:47:42.928918 251263 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1121 23:47:42.930225 251263 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1121 23:47:42.930998 251263 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1121 23:47:42.931460 251263 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1121 23:47:42.931483 251263 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1121 23:47:42.948957 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:42.956545 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:42.972599 251263 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1121 23:47:42.972629 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:42.991010 251263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1121 23:47:42.991043 251263 node_conditions.go:123] node cpu capacity is 2
I1121 23:47:42.991060 251263 node_conditions.go:105] duration metric: took 63.779822ms to run NodePressure ...
I1121 23:47:42.991073 251263 start.go:242] waiting for startup goroutines ...
I1121 23:47:43.000454 251263 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1121 23:47:43.000488 251263 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1121 23:47:43.064083 251263 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1121 23:47:43.064114 251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1121 23:47:43.143418 251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1121 23:47:43.424997 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:43.428350 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:43.438981 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:43.744014 251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.352903636s)
I1121 23:47:43.926051 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:43.926403 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:43.939557 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:44.470136 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:44.470507 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:44.470583 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:44.610973 251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.467509011s)
I1121 23:47:44.612084 251263 addons.go:495] Verifying addon gcp-auth=true in "addons-266876"
I1121 23:47:44.614664 251263 out.go:179] * Verifying gcp-auth addon...
I1121 23:47:44.617037 251263 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1121 23:47:44.679516 251263 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1121 23:47:44.679539 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:44.938585 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:44.939917 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:44.945173 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:45.125511 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:45.423184 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:45.424380 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:45.438459 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:45.621893 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:45.929603 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:45.933258 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:45.938917 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:46.123924 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:46.423081 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:46.425799 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:46.437310 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:46.623291 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:46.925943 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:46.926661 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:46.940308 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:47.120567 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:47.421527 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:47.422825 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:47.435356 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:47.622778 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:47.922908 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:47.925722 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:47.937113 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:48.122097 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:48.423467 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:48.423610 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:48.435064 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:48.622264 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:48.926889 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:48.926907 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:48.935809 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:49.124186 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:49.424165 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:49.424235 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:49.436947 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:49.623380 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:49.926485 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:49.926568 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:49.934726 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:50.149039 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:50.426766 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:50.427550 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:50.435800 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:50.623645 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:50.923166 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:50.924899 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:50.937932 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:51.120970 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:51.422946 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:51.423964 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:51.437143 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:51.623848 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:51.924227 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:51.929471 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:51.939629 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:52.261854 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:52.424962 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:52.428597 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:52.436986 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:52.622910 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:52.922271 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:52.924973 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:52.938365 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:53.121701 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:53.425753 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:53.438148 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:53.440564 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:53.709895 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:53.929068 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:53.931342 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:53.938714 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:54.122158 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:54.425360 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:54.428330 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:54.435907 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:54.623125 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:54.926160 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:54.926269 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:54.934959 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:55.123657 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:55.422851 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:55.423292 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:55.436852 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:55.621782 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:56.184531 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:56.185319 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:56.185351 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:56.185436 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:56.422356 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:56.422605 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:56.437477 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:56.621926 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:56.920916 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:56.921374 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:56.935238 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:57.120293 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:57.422033 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:57.424320 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:57.435388 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:57.621432 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:57.920963 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:57.924452 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:57.935839 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:58.121584 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:58.425091 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:58.425156 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:58.435426 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:58.635444 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:58.922739 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:58.923871 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:58.936112 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:59.123863 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:59.426020 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:59.430811 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:59.438808 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:47:59.623106 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:47:59.931900 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:47:59.936038 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:47:59.937959 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:00.122854 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:00.422993 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:48:00.424741 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:00.436196 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:00.620554 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:00.921652 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:48:00.922569 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:00.935087 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:01.123823 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:01.423850 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:48:01.425512 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:01.434928 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:01.621491 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:01.923505 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:48:01.924905 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:01.937201 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:02.121624 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:02.423602 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:02.423787 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:48:02.435107 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:02.620510 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:02.919996 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:02.921258 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:48:02.934427 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:03.121234 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:03.422602 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:03.422661 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:48:03.435654 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:03.627887 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:03.923184 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:03.923492 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:48:03.943565 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:04.122960 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:04.421986 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:04.422381 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:48:04.435361 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:04.623019 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:04.923848 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:48:04.925058 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:04.935882 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:05.121708 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:05.421718 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:05.421805 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:48:05.434879 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:05.622686 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:05.922353 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:48:05.923753 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:05.936216 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:06.120868 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:06.423712 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:48:06.423899 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:06.439806 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:06.625663 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:06.922260 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:48:06.922652 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:06.936062 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:07.121430 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:07.424027 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:07.424073 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1121 23:48:07.435511 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:07.622294 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:07.921125 251263 kapi.go:107] duration metric: took 27.005089483s to wait for kubernetes.io/minikube-addons=registry ...
I1121 23:48:07.923396 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:07.939621 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:08.121478 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:08.519292 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:08.522400 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:08.626487 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:08.919824 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:08.935099 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:09.123034 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:09.427247 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:09.439663 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:09.630747 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:09.924829 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:09.937762 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:10.126266 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:10.423912 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:10.442758 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:10.829148 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:10.928186 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:10.938788 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:11.126344 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:11.423503 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:11.440161 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:11.628256 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:11.922200 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:12.026774 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:12.122410 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:12.425763 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:12.435748 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:12.620552 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:12.954050 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:12.957856 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:13.126813 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:13.421360 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:13.435025 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:13.629500 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:13.922707 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:13.935410 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:14.123341 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:14.426174 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:14.436803 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:14.622210 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:14.941433 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:14.941557 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:15.122789 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:15.422344 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:15.435838 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:15.620803 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:15.922769 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:15.936263 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:16.123330 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:16.420710 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:16.437443 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:16.622053 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:16.922695 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:16.940782 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:17.241963 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:17.422836 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:17.436564 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:17.623372 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:17.919854 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:17.948897 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:18.124153 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:18.423733 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:18.436717 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:18.622046 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:18.922805 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:18.935793 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:19.122329 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:19.425051 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:19.439118 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:19.619916 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:19.920748 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:19.937662 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:20.128846 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:20.427312 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:20.441072 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:20.627540 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:20.922225 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:20.935498 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:21.125438 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:21.421980 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:21.435607 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:21.622394 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:21.920638 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:21.935580 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:22.121779 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:22.425387 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:22.436106 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:22.622379 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:22.922035 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:22.939454 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:23.123644 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:23.422127 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:23.437099 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:23.621255 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:23.921598 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:23.936278 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:24.121938 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:24.421559 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:24.435263 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:24.621048 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:24.921427 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:24.936154 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:25.128780 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:25.436990 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:25.447989 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:25.627750 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:25.925784 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:25.936653 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:26.125097 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:26.421139 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:26.435288 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:26.621354 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:26.979865 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:26.982130 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:27.121596 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:27.421737 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:27.436413 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:27.622223 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:27.923259 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:27.938238 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:28.122777 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:28.422102 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:28.435098 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:28.624943 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:28.923578 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:28.934884 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:29.123227 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:29.422918 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:29.440055 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:29.621947 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:29.924766 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:29.943765 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:30.125218 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:30.427521 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:30.435473 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:30.622346 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:30.926321 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:30.935211 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:31.125820 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:31.423165 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:31.435981 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:31.624574 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:31.924255 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:31.937572 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:32.123297 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:32.420253 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:32.435092 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:32.620642 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:32.924708 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:32.936867 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:33.122959 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:33.421260 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:33.435115 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:33.622355 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:33.922446 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:33.937891 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:34.121936 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:34.422837 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:34.436876 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:34.621392 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:34.922989 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:34.936968 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:35.121994 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:35.420314 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:35.435229 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1121 23:48:35.620372 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:35.921246 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:35.935379 251263 kapi.go:107] duration metric: took 53.004380156s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1121 23:48:36.121002 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:36.421297 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:36.620475 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:36.920737 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:37.121903 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:37.420740 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:37.621573 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:37.920470 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:38.120871 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:38.419747 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:38.620870 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:38.919569 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:39.121472 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:39.420632 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:39.621914 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:39.919274 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:40.120595 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:40.420718 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:40.621509 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:40.920672 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:41.121166 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:41.422011 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:41.622380 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:41.921196 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:42.120596 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:42.420828 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:42.621388 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:42.921558 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:43.121925 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:43.419853 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:43.622393 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:43.920887 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:44.121285 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:44.420735 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:44.622063 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:44.920303 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:45.123622 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:45.422460 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:45.623240 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:45.938878 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:46.121145 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:46.421462 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:46.621556 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:46.920539 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:47.123242 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:47.434774 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:47.623534 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:47.929223 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:48.125077 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:48.421704 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:48.623369 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:48.922650 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:49.123639 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:49.421456 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:49.624574 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:49.931049 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:50.124348 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:50.420556 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:50.622234 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:50.924025 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:51.124075 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:51.423011 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:51.623295 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:51.920670 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:52.121233 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:52.424341 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:52.621172 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:52.921299 251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1121 23:48:53.121769 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:53.420110 251263 kapi.go:107] duration metric: took 1m12.504106807s to wait for app.kubernetes.io/name=ingress-nginx ...
I1121 23:48:53.621962 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:54.127660 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:54.626400 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:55.122945 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:55.724403 251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1121 23:48:56.123402 251263 kapi.go:107] duration metric: took 1m11.506366647s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1121 23:48:56.125238 251263 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-266876 cluster.
I1121 23:48:56.126693 251263 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1121 23:48:56.128133 251263 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I1121 23:48:56.129655 251263 out.go:179] * Enabled addons: amd-gpu-device-plugin, inspektor-gadget, ingress-dns, registry-creds, nvidia-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I1121 23:48:56.131230 251263 addons.go:530] duration metric: took 1m25.815935443s for enable addons: enabled=[amd-gpu-device-plugin inspektor-gadget ingress-dns registry-creds nvidia-device-plugin cloud-spanner storage-provisioner default-storageclass metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I1121 23:48:56.131297 251263 start.go:247] waiting for cluster config update ...
I1121 23:48:56.131318 251263 start.go:256] writing updated cluster config ...
I1121 23:48:56.131603 251263 ssh_runner.go:195] Run: rm -f paused
I1121 23:48:56.139138 251263 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1121 23:48:56.143255 251263 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tgk67" in "kube-system" namespace to be "Ready" or be gone ...
I1121 23:48:56.149223 251263 pod_ready.go:94] pod "coredns-66bc5c9577-tgk67" is "Ready"
I1121 23:48:56.149248 251263 pod_ready.go:86] duration metric: took 5.967724ms for pod "coredns-66bc5c9577-tgk67" in "kube-system" namespace to be "Ready" or be gone ...
I1121 23:48:56.152622 251263 pod_ready.go:83] waiting for pod "etcd-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
I1121 23:48:56.158325 251263 pod_ready.go:94] pod "etcd-addons-266876" is "Ready"
I1121 23:48:56.158348 251263 pod_ready.go:86] duration metric: took 5.699178ms for pod "etcd-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
I1121 23:48:56.161017 251263 pod_ready.go:83] waiting for pod "kube-apiserver-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
I1121 23:48:56.165701 251263 pod_ready.go:94] pod "kube-apiserver-addons-266876" is "Ready"
I1121 23:48:56.165731 251263 pod_ready.go:86] duration metric: took 4.68133ms for pod "kube-apiserver-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
I1121 23:48:56.167794 251263 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
I1121 23:48:56.546100 251263 pod_ready.go:94] pod "kube-controller-manager-addons-266876" is "Ready"
I1121 23:48:56.546140 251263 pod_ready.go:86] duration metric: took 378.321116ms for pod "kube-controller-manager-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
I1121 23:48:56.744763 251263 pod_ready.go:83] waiting for pod "kube-proxy-d6jsf" in "kube-system" namespace to be "Ready" or be gone ...
I1121 23:48:57.145028 251263 pod_ready.go:94] pod "kube-proxy-d6jsf" is "Ready"
I1121 23:48:57.145065 251263 pod_ready.go:86] duration metric: took 400.263759ms for pod "kube-proxy-d6jsf" in "kube-system" namespace to be "Ready" or be gone ...
I1121 23:48:57.344109 251263 pod_ready.go:83] waiting for pod "kube-scheduler-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
I1121 23:48:57.744881 251263 pod_ready.go:94] pod "kube-scheduler-addons-266876" is "Ready"
I1121 23:48:57.744924 251263 pod_ready.go:86] duration metric: took 400.779811ms for pod "kube-scheduler-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
I1121 23:48:57.744942 251263 pod_ready.go:40] duration metric: took 1.605761032s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1121 23:48:57.792759 251263 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
I1121 23:48:57.794548 251263 out.go:179] * Done! kubectl is now configured to use "addons-266876" cluster and "default" namespace by default
==> CRI-O <==
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.553209868Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763769105553183131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:532066,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a41ebed2-b835-48fb-a024-b4f1e74207d0 name=/runtime.v1.ImageService/ImageFsInfo
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.554701091Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ee498f8-9be7-4b0d-baeb-52a497d97a67 name=/runtime.v1.RuntimeService/ListContainers
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.555172903Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ee498f8-9be7-4b0d-baeb-52a497d97a67 name=/runtime.v1.RuntimeService/ListContainers
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.556060231Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:991f92b0bd577e0738eef35d65b7c9638d3df53ccea140bf89bfa778f911574f,PodSandboxId:f7f9ecdee49d2fbed73c95266531e98528194cc29d1af1e15c07c6a5e790026a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763768959705557696,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e3ed8c7-5788-4d41-aba1-71043fc65fb1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1205f66bfddc482ac5d5dd1e86224c67e867616e61123b413f3bb6856473dc12,PodSandboxId:7a5080c12c12abdaab939ff714e1e919b116a416135fb5a4c0954b2d73aa77d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763768941325630164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b5956ac-11bb-458f-953a-f0fa68bf575e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f168b4d8b7da74efedb3be41e39c6d07020b9698695572f30b74f190a4d8dac,PodSandboxId:6d7ec67173c108730a451b146148cf342b99db36b05e9ea513110f7e26d0585e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763768932397704115,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-lg7z6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6a72ef50-d6e3-496e-bb81-685892037954,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:51813a3108d9e54ce1c3496176ac5114e7bd1188f2c3673f4a4a3480910eced6,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351f
d438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763768914769733150,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491a8ff7c586acede1f8b3b37821df605946465cc57d997a527260e81bc84cbe,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763768913042350736,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62345e24511bafa136f68a223ce7ed0c511a449ccaac17d536939d218364c8e0,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763768911235493555,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c36592147c998eee903b15108ec385188d6a10ba82bdbc75a1e806aedb354e7,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c
9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763768910213861282,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552ab85d759ae6b592b4d62982e120e0f046fdad6cf73d39fa8e079973301b19,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763768908471046022,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b904d30a44673800c0c3034a976f6ac03bbb3ec299f6d92bb1a5c6ea170a7c57,PodSandboxId:ff134a61cd64ed6f5542d7c8c8469ae269bd32b9fdf955f510ccf7e0b5589fb8,Meta
data:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763768907183648311,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dfe8e9b-0142-42cf-ba29-27aeadf91605,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4683ce225f87d35eb79e6cefdb9d7c48be7cc40e3230995b8b0843525d0bd27,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7
d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763768905559322925,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2e4d571c23b9bf
7d4bfb72e3338c12264469ce70e7e72754afd69967515d13,PodSandboxId:5007bb0b80f021b12e7f6a9a43425ddf2f36685c53a522cd08d8b714c4ec20eb,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763768903867338270,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79de7084-4282-49f8-a4d1-582323611ce3,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:d36db081fc46e4a769de524c439df3776fa94dd533d426b7d39c2e1306653d01,PodSandboxId:554f3c9987e3066901d6ca4d92840af21c420dd97f4a4542964dfbb8d915e03e,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763768903014677441,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ht8dl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 85e84c47-6bc3-4409-8954-c24ef4d80f99,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9171d1cd70aebc77
78cc2ae6b609dbe0a17d4a5c28a86a5944f33b666258a45,PodSandboxId:3414eb7a0316ace15e9a899adee597efdbbd854673912b762e969800af6a4f8a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763768902335122991,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xq799,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0cdf45b5-b337-4541-83d3-b7fdc232f1e1,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:37dea366f964b8791137712d958e380263c762d6943592d23f145fad119cd6b5,PodSandboxId:1e73211f223b9745771ca2d0de7f25252821645aa5ecdbedb9590018486b8b7e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900663296805,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-gcprx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cf49f5-ed6e-4aa5-bdfe-2494e5763f39,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f748bb4b27c533f7b87016c8a98346dcf32afaef8d230d73d0764252cbb72f,PodSandboxId:1c267215c3e5b6dac05194130ddb58527757745ce779475a28dd1c84610883b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900552896022,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-r57wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136bb70d-9950-46db-83d9-09b543dc4f72,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe7bb60492b04fd1b087025749469403adeef011272e8f3f22c00ada731cdcb3,PodSandboxId:15b64b5856939f8bac45fb57619eff6fafc932dc05ba97fe1943560b384bd630,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763768898866225812,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vl5f9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09a904d5-755f-4f1f-9525-b10e4e4b57a7,},Annotations:map[string]string{io.kubernetes.container.hash
: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bc290854e78459afc859cfdef271a0dcca5688dfdec552b77d3bddd2556238,PodSandboxId:182146df6179175ac72d7de036cbf942c44e60401b302493f3eb9d4f09c65f1c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763768876563315596,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8445af-f050-4525-a580-c6cb45567d21,},Annotations:map[
string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409,PodSandboxId:f1662e37013476f1ac5ede7d21406a137d8e8672c36dcf49068d30172dc639f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763768860250918214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 2855a3de-b990-447c-b094-274b5becf1da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d414f30f9b272529b258344187ca2317cbc5a4141f4ffe1bc6fa0f7df80bd5bb,PodSandboxId:79f2d64c3813a612bcd08d986b458be1fc1d2f0a3922d4db70b605b18db55f18,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763768857987378401,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pd4s
x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fffae7-a3c2-46ef-a382-867c1f45dd2f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198,PodSandboxId:9607023c4fe8e371d977e1e4a2b52b0e80675b763c0cd2b3ae209db00b96f2cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763768852094801650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tgk67,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: ad56ae13-a7c4-44e3-a817-73aa300110b6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40,PodSandboxId:1ce41f042f494eef5d0be46b1db7d599bafdcc97fef69e0d7f1782bd275c54ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6
a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763768851042427654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6jsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9f1dbf-19b7-4f19-8f33-11b6886f1237,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d,PodSandboxId:a6e11d2b9834f78610e1487003d17adb09fe565622e235df1217303546294639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763768839147876729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9c952f476472031bed61db83e3c978,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e,PodSandboxId:212b2600cae8f28bb69999f005d0b52383e46a97173ab430efc504c81b7dd8ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSp
ec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763768839130699997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca8736316ec035d06c4ec08eb70b85a,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7,PodSandboxId:7fb7e928bee471d4ec16ce9fb677f233172c530080bee3cf5baf5b1
c28826973,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763768839107116372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0a4452486189aa3ae5b593dc3a43cac,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a216f1821ac9afeecc64f7303745
36676eea6c11ca4c8f9bde7913e3b74b219,PodSandboxId:43d68a4f9086a85d2e8c987fa17f958b95725f1d9f37b716010e99d58676be1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763768839078348144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e0cb0275164bea778c164d97826c53,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ee498f8-9be7-4b0d-baeb-52a497d97a67 name=/runtime.v1.RuntimeService/ListContainers
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.595250671Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a0175ce-1a4e-4773-acd3-56b792be6602 name=/runtime.v1.RuntimeService/Version
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.595349281Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a0175ce-1a4e-4773-acd3-56b792be6602 name=/runtime.v1.RuntimeService/Version
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.596695558Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=05f73e53-9a5b-4231-ada4-99bc27d0ee89 name=/runtime.v1.ImageService/ImageFsInfo
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.597866488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763769105597839232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:532066,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=05f73e53-9a5b-4231-ada4-99bc27d0ee89 name=/runtime.v1.ImageService/ImageFsInfo
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.599131293Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b99bbaa3-2b86-4b06-ab3b-aac04dce552c name=/runtime.v1.RuntimeService/ListContainers
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.599333747Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b99bbaa3-2b86-4b06-ab3b-aac04dce552c name=/runtime.v1.RuntimeService/ListContainers
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.600424108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:991f92b0bd577e0738eef35d65b7c9638d3df53ccea140bf89bfa778f911574f,PodSandboxId:f7f9ecdee49d2fbed73c95266531e98528194cc29d1af1e15c07c6a5e790026a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763768959705557696,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e3ed8c7-5788-4d41-aba1-71043fc65fb1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1205f66bfddc482ac5d5dd1e86224c67e867616e61123b413f3bb6856473dc12,PodSandboxId:7a5080c12c12abdaab939ff714e1e919b116a416135fb5a4c0954b2d73aa77d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763768941325630164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b5956ac-11bb-458f-953a-f0fa68bf575e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f168b4d8b7da74efedb3be41e39c6d07020b9698695572f30b74f190a4d8dac,PodSandboxId:6d7ec67173c108730a451b146148cf342b99db36b05e9ea513110f7e26d0585e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763768932397704115,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-lg7z6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6a72ef50-d6e3-496e-bb81-685892037954,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:51813a3108d9e54ce1c3496176ac5114e7bd1188f2c3673f4a4a3480910eced6,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351f
d438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763768914769733150,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491a8ff7c586acede1f8b3b37821df605946465cc57d997a527260e81bc84cbe,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763768913042350736,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62345e24511bafa136f68a223ce7ed0c511a449ccaac17d536939d218364c8e0,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763768911235493555,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c36592147c998eee903b15108ec385188d6a10ba82bdbc75a1e806aedb354e7,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c
9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763768910213861282,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552ab85d759ae6b592b4d62982e120e0f046fdad6cf73d39fa8e079973301b19,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763768908471046022,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b904d30a44673800c0c3034a976f6ac03bbb3ec299f6d92bb1a5c6ea170a7c57,PodSandboxId:ff134a61cd64ed6f5542d7c8c8469ae269bd32b9fdf955f510ccf7e0b5589fb8,Meta
data:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763768907183648311,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dfe8e9b-0142-42cf-ba29-27aeadf91605,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4683ce225f87d35eb79e6cefdb9d7c48be7cc40e3230995b8b0843525d0bd27,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7
d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763768905559322925,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2e4d571c23b9bf
7d4bfb72e3338c12264469ce70e7e72754afd69967515d13,PodSandboxId:5007bb0b80f021b12e7f6a9a43425ddf2f36685c53a522cd08d8b714c4ec20eb,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763768903867338270,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79de7084-4282-49f8-a4d1-582323611ce3,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:d36db081fc46e4a769de524c439df3776fa94dd533d426b7d39c2e1306653d01,PodSandboxId:554f3c9987e3066901d6ca4d92840af21c420dd97f4a4542964dfbb8d915e03e,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763768903014677441,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ht8dl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 85e84c47-6bc3-4409-8954-c24ef4d80f99,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9171d1cd70aebc77
78cc2ae6b609dbe0a17d4a5c28a86a5944f33b666258a45,PodSandboxId:3414eb7a0316ace15e9a899adee597efdbbd854673912b762e969800af6a4f8a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763768902335122991,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xq799,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0cdf45b5-b337-4541-83d3-b7fdc232f1e1,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:37dea366f964b8791137712d958e380263c762d6943592d23f145fad119cd6b5,PodSandboxId:1e73211f223b9745771ca2d0de7f25252821645aa5ecdbedb9590018486b8b7e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900663296805,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-gcprx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cf49f5-ed6e-4aa5-bdfe-2494e5763f39,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f748bb4b27c533f7b87016c8a98346dcf32afaef8d230d73d0764252cbb72f,PodSandboxId:1c267215c3e5b6dac05194130ddb58527757745ce779475a28dd1c84610883b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900552896022,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-r57wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136bb70d-9950-46db-83d9-09b543dc4f72,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe7bb60492b04fd1b087025749469403adeef011272e8f3f22c00ada731cdcb3,PodSandboxId:15b64b5856939f8bac45fb57619eff6fafc932dc05ba97fe1943560b384bd630,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763768898866225812,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vl5f9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09a904d5-755f-4f1f-9525-b10e4e4b57a7,},Annotations:map[string]string{io.kubernetes.container.hash
: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bc290854e78459afc859cfdef271a0dcca5688dfdec552b77d3bddd2556238,PodSandboxId:182146df6179175ac72d7de036cbf942c44e60401b302493f3eb9d4f09c65f1c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763768876563315596,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8445af-f050-4525-a580-c6cb45567d21,},Annotations:map[
string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409,PodSandboxId:f1662e37013476f1ac5ede7d21406a137d8e8672c36dcf49068d30172dc639f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763768860250918214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 2855a3de-b990-447c-b094-274b5becf1da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d414f30f9b272529b258344187ca2317cbc5a4141f4ffe1bc6fa0f7df80bd5bb,PodSandboxId:79f2d64c3813a612bcd08d986b458be1fc1d2f0a3922d4db70b605b18db55f18,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763768857987378401,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pd4s
x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fffae7-a3c2-46ef-a382-867c1f45dd2f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198,PodSandboxId:9607023c4fe8e371d977e1e4a2b52b0e80675b763c0cd2b3ae209db00b96f2cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763768852094801650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tgk67,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: ad56ae13-a7c4-44e3-a817-73aa300110b6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40,PodSandboxId:1ce41f042f494eef5d0be46b1db7d599bafdcc97fef69e0d7f1782bd275c54ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6
a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763768851042427654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6jsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9f1dbf-19b7-4f19-8f33-11b6886f1237,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d,PodSandboxId:a6e11d2b9834f78610e1487003d17adb09fe565622e235df1217303546294639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763768839147876729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9c952f476472031bed61db83e3c978,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e,PodSandboxId:212b2600cae8f28bb69999f005d0b52383e46a97173ab430efc504c81b7dd8ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSp
ec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763768839130699997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca8736316ec035d06c4ec08eb70b85a,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7,PodSandboxId:7fb7e928bee471d4ec16ce9fb677f233172c530080bee3cf5baf5b1
c28826973,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763768839107116372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0a4452486189aa3ae5b593dc3a43cac,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a216f1821ac9afeecc64f7303745
36676eea6c11ca4c8f9bde7913e3b74b219,PodSandboxId:43d68a4f9086a85d2e8c987fa17f958b95725f1d9f37b716010e99d58676be1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763768839078348144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e0cb0275164bea778c164d97826c53,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b99bbaa3-2b86-4b06-ab3b-aac04dce552c name=/runtime.v1.RuntimeService/ListContainers
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.634514148Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4dbfa20a-d21c-4ac6-97be-75e76f3f1df4 name=/runtime.v1.RuntimeService/Version
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.634800173Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4dbfa20a-d21c-4ac6-97be-75e76f3f1df4 name=/runtime.v1.RuntimeService/Version
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.636504279Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1b2d423-d513-49b6-83ec-8dc34cad77bf name=/runtime.v1.ImageService/ImageFsInfo
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.638000468Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763769105637972480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:532066,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1b2d423-d513-49b6-83ec-8dc34cad77bf name=/runtime.v1.ImageService/ImageFsInfo
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.639058522Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=140fc03c-5931-4bd9-ad2b-735f2828185a name=/runtime.v1.RuntimeService/ListContainers
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.639123699Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=140fc03c-5931-4bd9-ad2b-735f2828185a name=/runtime.v1.RuntimeService/ListContainers
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.639594722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:991f92b0bd577e0738eef35d65b7c9638d3df53ccea140bf89bfa778f911574f,PodSandboxId:f7f9ecdee49d2fbed73c95266531e98528194cc29d1af1e15c07c6a5e790026a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763768959705557696,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e3ed8c7-5788-4d41-aba1-71043fc65fb1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1205f66bfddc482ac5d5dd1e86224c67e867616e61123b413f3bb6856473dc12,PodSandboxId:7a5080c12c12abdaab939ff714e1e919b116a416135fb5a4c0954b2d73aa77d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763768941325630164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b5956ac-11bb-458f-953a-f0fa68bf575e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f168b4d8b7da74efedb3be41e39c6d07020b9698695572f30b74f190a4d8dac,PodSandboxId:6d7ec67173c108730a451b146148cf342b99db36b05e9ea513110f7e26d0585e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763768932397704115,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-lg7z6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6a72ef50-d6e3-496e-bb81-685892037954,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:51813a3108d9e54ce1c3496176ac5114e7bd1188f2c3673f4a4a3480910eced6,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351f
d438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763768914769733150,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491a8ff7c586acede1f8b3b37821df605946465cc57d997a527260e81bc84cbe,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763768913042350736,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62345e24511bafa136f68a223ce7ed0c511a449ccaac17d536939d218364c8e0,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763768911235493555,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c36592147c998eee903b15108ec385188d6a10ba82bdbc75a1e806aedb354e7,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c
9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763768910213861282,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552ab85d759ae6b592b4d62982e120e0f046fdad6cf73d39fa8e079973301b19,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763768908471046022,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b904d30a44673800c0c3034a976f6ac03bbb3ec299f6d92bb1a5c6ea170a7c57,PodSandboxId:ff134a61cd64ed6f5542d7c8c8469ae269bd32b9fdf955f510ccf7e0b5589fb8,Meta
data:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763768907183648311,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dfe8e9b-0142-42cf-ba29-27aeadf91605,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4683ce225f87d35eb79e6cefdb9d7c48be7cc40e3230995b8b0843525d0bd27,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7
d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763768905559322925,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2e4d571c23b9bf
7d4bfb72e3338c12264469ce70e7e72754afd69967515d13,PodSandboxId:5007bb0b80f021b12e7f6a9a43425ddf2f36685c53a522cd08d8b714c4ec20eb,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763768903867338270,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79de7084-4282-49f8-a4d1-582323611ce3,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:d36db081fc46e4a769de524c439df3776fa94dd533d426b7d39c2e1306653d01,PodSandboxId:554f3c9987e3066901d6ca4d92840af21c420dd97f4a4542964dfbb8d915e03e,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763768903014677441,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ht8dl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 85e84c47-6bc3-4409-8954-c24ef4d80f99,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9171d1cd70aebc77
78cc2ae6b609dbe0a17d4a5c28a86a5944f33b666258a45,PodSandboxId:3414eb7a0316ace15e9a899adee597efdbbd854673912b762e969800af6a4f8a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763768902335122991,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xq799,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0cdf45b5-b337-4541-83d3-b7fdc232f1e1,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:37dea366f964b8791137712d958e380263c762d6943592d23f145fad119cd6b5,PodSandboxId:1e73211f223b9745771ca2d0de7f25252821645aa5ecdbedb9590018486b8b7e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900663296805,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-gcprx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cf49f5-ed6e-4aa5-bdfe-2494e5763f39,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f748bb4b27c533f7b87016c8a98346dcf32afaef8d230d73d0764252cbb72f,PodSandboxId:1c267215c3e5b6dac05194130ddb58527757745ce779475a28dd1c84610883b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900552896022,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-r57wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136bb70d-9950-46db-83d9-09b543dc4f72,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe7bb60492b04fd1b087025749469403adeef011272e8f3f22c00ada731cdcb3,PodSandboxId:15b64b5856939f8bac45fb57619eff6fafc932dc05ba97fe1943560b384bd630,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763768898866225812,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vl5f9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09a904d5-755f-4f1f-9525-b10e4e4b57a7,},Annotations:map[string]string{io.kubernetes.container.hash
: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bc290854e78459afc859cfdef271a0dcca5688dfdec552b77d3bddd2556238,PodSandboxId:182146df6179175ac72d7de036cbf942c44e60401b302493f3eb9d4f09c65f1c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763768876563315596,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8445af-f050-4525-a580-c6cb45567d21,},Annotations:map[
string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409,PodSandboxId:f1662e37013476f1ac5ede7d21406a137d8e8672c36dcf49068d30172dc639f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763768860250918214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 2855a3de-b990-447c-b094-274b5becf1da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d414f30f9b272529b258344187ca2317cbc5a4141f4ffe1bc6fa0f7df80bd5bb,PodSandboxId:79f2d64c3813a612bcd08d986b458be1fc1d2f0a3922d4db70b605b18db55f18,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763768857987378401,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pd4s
x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fffae7-a3c2-46ef-a382-867c1f45dd2f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198,PodSandboxId:9607023c4fe8e371d977e1e4a2b52b0e80675b763c0cd2b3ae209db00b96f2cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763768852094801650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tgk67,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: ad56ae13-a7c4-44e3-a817-73aa300110b6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40,PodSandboxId:1ce41f042f494eef5d0be46b1db7d599bafdcc97fef69e0d7f1782bd275c54ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6
a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763768851042427654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6jsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9f1dbf-19b7-4f19-8f33-11b6886f1237,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d,PodSandboxId:a6e11d2b9834f78610e1487003d17adb09fe565622e235df1217303546294639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763768839147876729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9c952f476472031bed61db83e3c978,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e,PodSandboxId:212b2600cae8f28bb69999f005d0b52383e46a97173ab430efc504c81b7dd8ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSp
ec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763768839130699997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca8736316ec035d06c4ec08eb70b85a,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7,PodSandboxId:7fb7e928bee471d4ec16ce9fb677f233172c530080bee3cf5baf5b1
c28826973,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763768839107116372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0a4452486189aa3ae5b593dc3a43cac,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a216f1821ac9afeecc64f7303745
36676eea6c11ca4c8f9bde7913e3b74b219,PodSandboxId:43d68a4f9086a85d2e8c987fa17f958b95725f1d9f37b716010e99d58676be1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763768839078348144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e0cb0275164bea778c164d97826c53,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=140fc03c-5931-4bd9-ad2b-735f2828185a name=/runtime.v1.RuntimeService/ListContainers
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.673498692Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dba9004b-37cb-4d1a-af77-2d995e943938 name=/runtime.v1.RuntimeService/Version
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.673614073Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dba9004b-37cb-4d1a-af77-2d995e943938 name=/runtime.v1.RuntimeService/Version
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.675790250Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55ad8030-2fda-4c41-9313-714466988958 name=/runtime.v1.ImageService/ImageFsInfo
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.677357697Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763769105677268717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:532066,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55ad8030-2fda-4c41-9313-714466988958 name=/runtime.v1.ImageService/ImageFsInfo
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.678703487Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7529c168-9b69-4604-ad08-95ce4d9a7fa4 name=/runtime.v1.RuntimeService/ListContainers
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.678783145Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7529c168-9b69-4604-ad08-95ce4d9a7fa4 name=/runtime.v1.RuntimeService/ListContainers
Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.679341937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:991f92b0bd577e0738eef35d65b7c9638d3df53ccea140bf89bfa778f911574f,PodSandboxId:f7f9ecdee49d2fbed73c95266531e98528194cc29d1af1e15c07c6a5e790026a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763768959705557696,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e3ed8c7-5788-4d41-aba1-71043fc65fb1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1205f66bfddc482ac5d5dd1e86224c67e867616e61123b413f3bb6856473dc12,PodSandboxId:7a5080c12c12abdaab939ff714e1e919b116a416135fb5a4c0954b2d73aa77d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763768941325630164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b5956ac-11bb-458f-953a-f0fa68bf575e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f168b4d8b7da74efedb3be41e39c6d07020b9698695572f30b74f190a4d8dac,PodSandboxId:6d7ec67173c108730a451b146148cf342b99db36b05e9ea513110f7e26d0585e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763768932397704115,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-lg7z6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6a72ef50-d6e3-496e-bb81-685892037954,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:51813a3108d9e54ce1c3496176ac5114e7bd1188f2c3673f4a4a3480910eced6,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351f
d438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763768914769733150,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491a8ff7c586acede1f8b3b37821df605946465cc57d997a527260e81bc84cbe,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763768913042350736,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62345e24511bafa136f68a223ce7ed0c511a449ccaac17d536939d218364c8e0,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763768911235493555,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c36592147c998eee903b15108ec385188d6a10ba82bdbc75a1e806aedb354e7,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c
9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763768910213861282,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552ab85d759ae6b592b4d62982e120e0f046fdad6cf73d39fa8e079973301b19,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763768908471046022,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b904d30a44673800c0c3034a976f6ac03bbb3ec299f6d92bb1a5c6ea170a7c57,PodSandboxId:ff134a61cd64ed6f5542d7c8c8469ae269bd32b9fdf955f510ccf7e0b5589fb8,Meta
data:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763768907183648311,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dfe8e9b-0142-42cf-ba29-27aeadf91605,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4683ce225f87d35eb79e6cefdb9d7c48be7cc40e3230995b8b0843525d0bd27,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7
d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763768905559322925,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2e4d571c23b9bf
7d4bfb72e3338c12264469ce70e7e72754afd69967515d13,PodSandboxId:5007bb0b80f021b12e7f6a9a43425ddf2f36685c53a522cd08d8b714c4ec20eb,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763768903867338270,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79de7084-4282-49f8-a4d1-582323611ce3,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:d36db081fc46e4a769de524c439df3776fa94dd533d426b7d39c2e1306653d01,PodSandboxId:554f3c9987e3066901d6ca4d92840af21c420dd97f4a4542964dfbb8d915e03e,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763768903014677441,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ht8dl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 85e84c47-6bc3-4409-8954-c24ef4d80f99,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9171d1cd70aebc77
78cc2ae6b609dbe0a17d4a5c28a86a5944f33b666258a45,PodSandboxId:3414eb7a0316ace15e9a899adee597efdbbd854673912b762e969800af6a4f8a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763768902335122991,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xq799,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0cdf45b5-b337-4541-83d3-b7fdc232f1e1,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:37dea366f964b8791137712d958e380263c762d6943592d23f145fad119cd6b5,PodSandboxId:1e73211f223b9745771ca2d0de7f25252821645aa5ecdbedb9590018486b8b7e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900663296805,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-gcprx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cf49f5-ed6e-4aa5-bdfe-2494e5763f39,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f748bb4b27c533f7b87016c8a98346dcf32afaef8d230d73d0764252cbb72f,PodSandboxId:1c267215c3e5b6dac05194130ddb58527757745ce779475a28dd1c84610883b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900552896022,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-r57wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136bb70d-9950-46db-83d9-09b543dc4f72,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe7bb60492b04fd1b087025749469403adeef011272e8f3f22c00ada731cdcb3,PodSandboxId:15b64b5856939f8bac45fb57619eff6fafc932dc05ba97fe1943560b384bd630,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763768898866225812,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vl5f9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09a904d5-755f-4f1f-9525-b10e4e4b57a7,},Annotations:map[string]string{io.kubernetes.container.hash
: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bc290854e78459afc859cfdef271a0dcca5688dfdec552b77d3bddd2556238,PodSandboxId:182146df6179175ac72d7de036cbf942c44e60401b302493f3eb9d4f09c65f1c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763768876563315596,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8445af-f050-4525-a580-c6cb45567d21,},Annotations:map[
string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409,PodSandboxId:f1662e37013476f1ac5ede7d21406a137d8e8672c36dcf49068d30172dc639f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763768860250918214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 2855a3de-b990-447c-b094-274b5becf1da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d414f30f9b272529b258344187ca2317cbc5a4141f4ffe1bc6fa0f7df80bd5bb,PodSandboxId:79f2d64c3813a612bcd08d986b458be1fc1d2f0a3922d4db70b605b18db55f18,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763768857987378401,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pd4s
x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fffae7-a3c2-46ef-a382-867c1f45dd2f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198,PodSandboxId:9607023c4fe8e371d977e1e4a2b52b0e80675b763c0cd2b3ae209db00b96f2cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763768852094801650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tgk67,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: ad56ae13-a7c4-44e3-a817-73aa300110b6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40,PodSandboxId:1ce41f042f494eef5d0be46b1db7d599bafdcc97fef69e0d7f1782bd275c54ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6
a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763768851042427654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6jsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9f1dbf-19b7-4f19-8f33-11b6886f1237,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d,PodSandboxId:a6e11d2b9834f78610e1487003d17adb09fe565622e235df1217303546294639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763768839147876729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9c952f476472031bed61db83e3c978,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e,PodSandboxId:212b2600cae8f28bb69999f005d0b52383e46a97173ab430efc504c81b7dd8ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSp
ec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763768839130699997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca8736316ec035d06c4ec08eb70b85a,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7,PodSandboxId:7fb7e928bee471d4ec16ce9fb677f233172c530080bee3cf5baf5b1
c28826973,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763768839107116372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0a4452486189aa3ae5b593dc3a43cac,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a216f1821ac9afeecc64f7303745
36676eea6c11ca4c8f9bde7913e3b74b219,PodSandboxId:43d68a4f9086a85d2e8c987fa17f958b95725f1d9f37b716010e99d58676be1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763768839078348144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e0cb0275164bea778c164d97826c53,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7529c168-9b69-4604-ad08-95ce4d9a7fa4 name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
991f92b0bd577 docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 2 minutes ago Running nginx 0 f7f9ecdee49d2 nginx default
1205f66bfddc4 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 2 minutes ago Running busybox 0 7a5080c12c12a busybox default
3f168b4d8b7da registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27 2 minutes ago Running controller 0 6d7ec67173c10 ingress-nginx-controller-6c8bf45fb-lg7z6 ingress-nginx
51813a3108d9e registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 3 minutes ago Running csi-snapshotter 0 d6163d79acc66 csi-hostpathplugin-gvwq9 kube-system
491a8ff7c586a registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7 3 minutes ago Running csi-provisioner 0 d6163d79acc66 csi-hostpathplugin-gvwq9 kube-system
62345e24511ba registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6 3 minutes ago Running liveness-probe 0 d6163d79acc66 csi-hostpathplugin-gvwq9 kube-system
4c36592147c99 registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11 3 minutes ago Running hostpath 0 d6163d79acc66 csi-hostpathplugin-gvwq9 kube-system
552ab85d759ae registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc 3 minutes ago Running node-driver-registrar 0 d6163d79acc66 csi-hostpathplugin-gvwq9 kube-system
b904d30a44673 registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0 3 minutes ago Running csi-attacher 0 ff134a61cd64e csi-hostpath-attacher-0 kube-system
b4683ce225f87 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864 3 minutes ago Running csi-external-health-monitor-controller 0 d6163d79acc66 csi-hostpathplugin-gvwq9 kube-system
ea2e4d571c23b registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8 3 minutes ago Running csi-resizer 0 5007bb0b80f02 csi-hostpath-resizer-0 kube-system
d36db081fc46e 884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45 3 minutes ago Exited patch 1 554f3c9987e30 ingress-nginx-admission-patch-ht8dl ingress-nginx
d9171d1cd70ae registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f 3 minutes ago Exited create 0 3414eb7a0316a ingress-nginx-admission-create-xq799 ingress-nginx
37dea366f964b registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922 3 minutes ago Running volume-snapshot-controller 0 1e73211f223b9 snapshot-controller-7d9fbc56b8-gcprx kube-system
16f748bb4b27c registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922 3 minutes ago Running volume-snapshot-controller 0 1c267215c3e5b snapshot-controller-7d9fbc56b8-r57wx kube-system
fe7bb60492b04 docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef 3 minutes ago Running local-path-provisioner 0 15b64b5856939 local-path-provisioner-648f6765c9-vl5f9 local-path-storage
e7bc290854e78 docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 3 minutes ago Running minikube-ingress-dns 0 182146df61791 kube-ingress-dns-minikube kube-system
62fac18e2a4ec 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 f1662e3701347 storage-provisioner kube-system
d414f30f9b272 docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 79f2d64c3813a amd-gpu-device-plugin-pd4sx kube-system
e880e3438bfbb 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 9607023c4fe8e coredns-66bc5c9577-tgk67 kube-system
9ba59e7c8953d fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7 4 minutes ago Running kube-proxy 0 1ce41f042f494 kube-proxy-d6jsf kube-system
8d89e7dd43a03 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813 4 minutes ago Running kube-scheduler 0 a6e11d2b9834f kube-scheduler-addons-266876 kube-system
5c5891e44197c 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115 4 minutes ago Running etcd 0 212b2600cae8f etcd-addons-266876 kube-system
9b2349c8754b0 c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97 4 minutes ago Running kube-apiserver 0 7fb7e928bee47 kube-apiserver-addons-266876 kube-system
3a216f1821ac9 c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f 4 minutes ago Running kube-controller-manager 0 43d68a4f9086a kube-controller-manager-addons-266876 kube-system
==> coredns [e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198] <==
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
[INFO] Reloading complete
[INFO] 10.244.0.23:38034 - 36875 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000620114s
[INFO] 10.244.0.23:40973 - 25486 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000178533s
[INFO] 10.244.0.23:41681 - 40964 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000163049s
[INFO] 10.244.0.23:47936 - 55627 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000146061s
[INFO] 10.244.0.23:57173 - 44150 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001233146s
[INFO] 10.244.0.23:48993 - 8029 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000276551s
[INFO] 10.244.0.23:50684 - 42721 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001523821s
[INFO] 10.244.0.23:45784 - 22668 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001444737s
[INFO] 10.244.0.27:39628 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000320107s
[INFO] 10.244.0.27:34513 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000140101s
==> describe nodes <==
Name: addons-266876
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-266876
kubernetes.io/os=linux
minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
minikube.k8s.io/name=addons-266876
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_11_21T23_47_26_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-266876
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-266876"}
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 21 Nov 2025 23:47:22 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-266876
AcquireTime: <unset>
RenewTime: Fri, 21 Nov 2025 23:51:41 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 21 Nov 2025 23:49:58 +0000 Fri, 21 Nov 2025 23:47:19 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 21 Nov 2025 23:49:58 +0000 Fri, 21 Nov 2025 23:47:19 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 21 Nov 2025 23:49:58 +0000 Fri, 21 Nov 2025 23:47:19 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 21 Nov 2025 23:49:58 +0000 Fri, 21 Nov 2025 23:47:26 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.50
Hostname: addons-266876
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
System Info:
Machine ID: c4a95d5c27154bec8bc2a50909bf4217
System UUID: c4a95d5c-2715-4bec-8bc2-a50909bf4217
Boot ID: 7afcec11-c11b-4436-b252-c2dac139e51f
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (20 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m48s
default hello-world-app-5d498dc89-sqvxb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m30s
default task-pv-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m5s
ingress-nginx ingress-nginx-controller-6c8bf45fb-lg7z6 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m7s
kube-system amd-gpu-device-plugin-pd4sx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m13s
kube-system coredns-66bc5c9577-tgk67 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m16s
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m4s
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m4s
kube-system csi-hostpathplugin-gvwq9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m4s
kube-system etcd-addons-266876 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m22s
kube-system kube-apiserver-addons-266876 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m21s
kube-system kube-controller-manager-addons-266876 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m21s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m10s
kube-system kube-proxy-d6jsf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m16s
kube-system kube-scheduler-addons-266876 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m21s
kube-system snapshot-controller-7d9fbc56b8-gcprx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m5s
kube-system snapshot-controller-7d9fbc56b8-r57wx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m5s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m9s
local-path-storage local-path-provisioner-648f6765c9-vl5f9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m8s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m13s kube-proxy
Normal Starting 4m28s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m28s (x8 over 4m28s) kubelet Node addons-266876 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m28s (x8 over 4m28s) kubelet Node addons-266876 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m28s (x7 over 4m28s) kubelet Node addons-266876 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m28s kubelet Updated Node Allocatable limit across pods
Normal Starting 4m21s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m21s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m21s kubelet Node addons-266876 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m21s kubelet Node addons-266876 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m21s kubelet Node addons-266876 status is now: NodeHasSufficientPID
Normal NodeReady 4m20s kubelet Node addons-266876 status is now: NodeReady
Normal RegisteredNode 4m17s node-controller Node addons-266876 event: Registered Node addons-266876 in Controller
==> dmesg <==
[ +0.350546] kauditd_printk_skb: 18 callbacks suppressed
[ +1.615207] kauditd_printk_skb: 297 callbacks suppressed
[ +1.386553] kauditd_printk_skb: 314 callbacks suppressed
[ +3.245635] kauditd_printk_skb: 404 callbacks suppressed
[ +8.078733] kauditd_printk_skb: 5 callbacks suppressed
[Nov21 23:48] kauditd_printk_skb: 5 callbacks suppressed
[ +5.490595] kauditd_printk_skb: 26 callbacks suppressed
[ +6.260482] kauditd_printk_skb: 38 callbacks suppressed
[ +5.041216] kauditd_printk_skb: 113 callbacks suppressed
[ +5.004515] kauditd_printk_skb: 80 callbacks suppressed
[ +3.836804] kauditd_printk_skb: 136 callbacks suppressed
[ +4.200452] kauditd_printk_skb: 82 callbacks suppressed
[ +0.000029] kauditd_printk_skb: 5 callbacks suppressed
[ +0.000031] kauditd_printk_skb: 29 callbacks suppressed
[ +5.254098] kauditd_printk_skb: 53 callbacks suppressed
[Nov21 23:49] kauditd_printk_skb: 47 callbacks suppressed
[ +9.475817] kauditd_printk_skb: 17 callbacks suppressed
[ +5.686428] kauditd_printk_skb: 22 callbacks suppressed
[ +4.598673] kauditd_printk_skb: 95 callbacks suppressed
[ +1.253211] kauditd_printk_skb: 79 callbacks suppressed
[ +0.652321] kauditd_printk_skb: 101 callbacks suppressed
[ +0.000031] kauditd_printk_skb: 20 callbacks suppressed
[ +9.880165] kauditd_printk_skb: 114 callbacks suppressed
[Nov21 23:51] kauditd_printk_skb: 22 callbacks suppressed
[ +0.811687] kauditd_printk_skb: 51 callbacks suppressed
==> etcd [5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e] <==
{"level":"info","ts":"2025-11-21T23:47:56.169034Z","caller":"traceutil/trace.go:172","msg":"trace[503038924] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:933; }","duration":"252.581534ms","start":"2025-11-21T23:47:55.916448Z","end":"2025-11-21T23:47:56.169029Z","steps":["trace[503038924] 'agreement among raft nodes before linearized reading' (duration: 252.55235ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-21T23:47:59.352083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57198","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-21T23:47:59.363648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57214","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-21T23:47:59.513514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57238","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-21T23:47:59.589561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57252","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-11-21T23:48:08.513782Z","caller":"traceutil/trace.go:172","msg":"trace[715400112] transaction","detail":"{read_only:false; response_revision:978; number_of_response:1; }","duration":"116.418162ms","start":"2025-11-21T23:48:08.397351Z","end":"2025-11-21T23:48:08.513770Z","steps":["trace[715400112] 'process raft request' (duration: 116.119443ms)"],"step_count":1}
{"level":"info","ts":"2025-11-21T23:48:10.824125Z","caller":"traceutil/trace.go:172","msg":"trace[2036679806] linearizableReadLoop","detail":"{readStateIndex:1014; appliedIndex:1014; }","duration":"203.849321ms","start":"2025-11-21T23:48:10.620261Z","end":"2025-11-21T23:48:10.824110Z","steps":["trace[2036679806] 'read index received' (duration: 203.843953ms)","trace[2036679806] 'applied index is now lower than readState.Index' (duration: 4.512µs)"],"step_count":2}
{"level":"warn","ts":"2025-11-21T23:48:10.824235Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"203.952821ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-21T23:48:10.824255Z","caller":"traceutil/trace.go:172","msg":"trace[1038609178] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:986; }","duration":"203.992763ms","start":"2025-11-21T23:48:10.620257Z","end":"2025-11-21T23:48:10.824249Z","steps":["trace[1038609178] 'agreement among raft nodes before linearized reading' (duration: 203.924903ms)"],"step_count":1}
{"level":"info","ts":"2025-11-21T23:48:10.827067Z","caller":"traceutil/trace.go:172","msg":"trace[958942931] transaction","detail":"{read_only:false; response_revision:987; number_of_response:1; }","duration":"216.790232ms","start":"2025-11-21T23:48:10.610267Z","end":"2025-11-21T23:48:10.827057Z","steps":["trace[958942931] 'process raft request' (duration: 213.950708ms)"],"step_count":1}
{"level":"info","ts":"2025-11-21T23:48:17.235529Z","caller":"traceutil/trace.go:172","msg":"trace[2072959660] linearizableReadLoop","detail":"{readStateIndex:1040; appliedIndex:1040; }","duration":"118.859084ms","start":"2025-11-21T23:48:17.116651Z","end":"2025-11-21T23:48:17.235510Z","steps":["trace[2072959660] 'read index received' (duration: 118.853824ms)","trace[2072959660] 'applied index is now lower than readState.Index' (duration: 4.479µs)"],"step_count":2}
{"level":"warn","ts":"2025-11-21T23:48:17.235633Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.964818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-21T23:48:17.235650Z","caller":"traceutil/trace.go:172","msg":"trace[1291312129] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1011; }","duration":"118.997232ms","start":"2025-11-21T23:48:17.116647Z","end":"2025-11-21T23:48:17.235645Z","steps":["trace[1291312129] 'agreement among raft nodes before linearized reading' (duration: 118.929178ms)"],"step_count":1}
{"level":"info","ts":"2025-11-21T23:48:17.236014Z","caller":"traceutil/trace.go:172","msg":"trace[409496112] transaction","detail":"{read_only:false; response_revision:1012; number_of_response:1; }","duration":"245.19274ms","start":"2025-11-21T23:48:16.990813Z","end":"2025-11-21T23:48:17.236006Z","steps":["trace[409496112] 'process raft request' (duration: 245.052969ms)"],"step_count":1}
{"level":"info","ts":"2025-11-21T23:48:20.410362Z","caller":"traceutil/trace.go:172","msg":"trace[828505748] transaction","detail":"{read_only:false; response_revision:1027; number_of_response:1; }","duration":"157.893848ms","start":"2025-11-21T23:48:20.252456Z","end":"2025-11-21T23:48:20.410350Z","steps":["trace[828505748] 'process raft request' (duration: 157.749487ms)"],"step_count":1}
{"level":"info","ts":"2025-11-21T23:48:26.972869Z","caller":"traceutil/trace.go:172","msg":"trace[583749754] transaction","detail":"{read_only:false; response_revision:1085; number_of_response:1; }","duration":"180.54926ms","start":"2025-11-21T23:48:26.792295Z","end":"2025-11-21T23:48:26.972845Z","steps":["trace[583749754] 'process raft request' (duration: 180.444491ms)"],"step_count":1}
{"level":"info","ts":"2025-11-21T23:48:55.718332Z","caller":"traceutil/trace.go:172","msg":"trace[218102785] linearizableReadLoop","detail":"{readStateIndex:1235; appliedIndex:1235; }","duration":"102.447461ms","start":"2025-11-21T23:48:55.615863Z","end":"2025-11-21T23:48:55.718310Z","steps":["trace[218102785] 'read index received' (duration: 102.442519ms)","trace[218102785] 'applied index is now lower than readState.Index' (duration: 4.145µs)"],"step_count":2}
{"level":"warn","ts":"2025-11-21T23:48:55.718517Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.662851ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-21T23:48:55.718556Z","caller":"traceutil/trace.go:172","msg":"trace[280205783] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1197; }","duration":"102.741104ms","start":"2025-11-21T23:48:55.615807Z","end":"2025-11-21T23:48:55.718548Z","steps":["trace[280205783] 'agreement among raft nodes before linearized reading' (duration: 102.634025ms)"],"step_count":1}
{"level":"info","ts":"2025-11-21T23:48:55.718853Z","caller":"traceutil/trace.go:172","msg":"trace[1563407473] transaction","detail":"{read_only:false; response_revision:1198; number_of_response:1; }","duration":"160.082369ms","start":"2025-11-21T23:48:55.558762Z","end":"2025-11-21T23:48:55.718844Z","steps":["trace[1563407473] 'process raft request' (duration: 160.006081ms)"],"step_count":1}
{"level":"info","ts":"2025-11-21T23:49:25.230279Z","caller":"traceutil/trace.go:172","msg":"trace[1746671191] transaction","detail":"{read_only:false; response_revision:1422; number_of_response:1; }","duration":"130.337483ms","start":"2025-11-21T23:49:25.099914Z","end":"2025-11-21T23:49:25.230251Z","steps":["trace[1746671191] 'process raft request' (duration: 128.456166ms)"],"step_count":1}
{"level":"info","ts":"2025-11-21T23:49:31.443123Z","caller":"traceutil/trace.go:172","msg":"trace[1229097043] linearizableReadLoop","detail":"{readStateIndex:1512; appliedIndex:1512; }","duration":"121.2839ms","start":"2025-11-21T23:49:31.321821Z","end":"2025-11-21T23:49:31.443104Z","steps":["trace[1229097043] 'read index received' (duration: 121.277728ms)","trace[1229097043] 'applied index is now lower than readState.Index' (duration: 4.966µs)"],"step_count":2}
{"level":"warn","ts":"2025-11-21T23:49:31.443287Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.446592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-21T23:49:31.443311Z","caller":"traceutil/trace.go:172","msg":"trace[1460275697] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1465; }","duration":"121.507541ms","start":"2025-11-21T23:49:31.321797Z","end":"2025-11-21T23:49:31.443305Z","steps":["trace[1460275697] 'agreement among raft nodes before linearized reading' (duration: 121.416565ms)"],"step_count":1}
{"level":"info","ts":"2025-11-21T23:49:31.444122Z","caller":"traceutil/trace.go:172","msg":"trace[1873839518] transaction","detail":"{read_only:false; response_revision:1466; number_of_response:1; }","duration":"152.736081ms","start":"2025-11-21T23:49:31.291375Z","end":"2025-11-21T23:49:31.444111Z","steps":["trace[1873839518] 'process raft request' (duration: 152.523387ms)"],"step_count":1}
==> kernel <==
23:51:46 up 4 min, 0 users, load average: 0.48, 1.30, 0.69
Linux addons-266876 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7] <==
W1121 23:47:42.366369 1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1121 23:47:42.410842 1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
I1121 23:47:42.649275 1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.103.205.35"}
I1121 23:47:44.226888 1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.96.203.27"}
W1121 23:47:59.343614 1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
W1121 23:47:59.366318 1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1121 23:47:59.513667 1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1121 23:47:59.564438 1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
W1121 23:48:11.667772 1 handler_proxy.go:99] no RequestInfo found in the context
E1121 23:48:11.669231 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.6.129:443: connect: connection refused" logger="UnhandledError"
E1121 23:48:11.670277 1 controller.go:146] "Unhandled Error" err=<
Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
E1121 23:48:11.672393 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.6.129:443: connect: connection refused" logger="UnhandledError"
E1121 23:48:11.677441 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.6.129:443: connect: connection refused" logger="UnhandledError"
E1121 23:48:11.699611 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.6.129:443: connect: connection refused" logger="UnhandledError"
I1121 23:48:11.830392 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1121 23:49:07.600969 1 conn.go:339] Error on socket receive: read tcp 192.168.39.50:8443->192.168.39.1:39632: use of closed network connection
E1121 23:49:07.806030 1 conn.go:339] Error on socket receive: read tcp 192.168.39.50:8443->192.168.39.1:39646: use of closed network connection
I1121 23:49:16.529402 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1121 23:49:16.732737 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.151.240"}
I1121 23:49:17.182251 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.21.116"}
I1121 23:50:12.699613 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1121 23:51:44.509660 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.217.27"}
==> kube-controller-manager [3a216f1821ac9afeecc64f730374536676eea6c11ca4c8f9bde7913e3b74b219] <==
I1121 23:47:29.337483 1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
I1121 23:47:29.337520 1 shared_informer.go:356] "Caches are synced" controller="GC"
I1121 23:47:29.337577 1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
I1121 23:47:29.337666 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I1121 23:47:29.338254 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
I1121 23:47:29.338833 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I1121 23:47:29.339107 1 shared_informer.go:356] "Caches are synced" controller="expand"
I1121 23:47:29.340477 1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
I1121 23:47:29.340506 1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
I1121 23:47:29.341040 1 shared_informer.go:356] "Caches are synced" controller="stateful set"
I1121 23:47:29.343803 1 shared_informer.go:356] "Caches are synced" controller="namespace"
I1121 23:47:29.357152 1 shared_informer.go:356] "Caches are synced" controller="job"
I1121 23:47:29.371508 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1121 23:47:37.577649 1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
E1121 23:47:59.325689 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I1121 23:47:59.326487 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
I1121 23:47:59.326701 1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
I1121 23:47:59.433161 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I1121 23:47:59.436132 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1121 23:47:59.460324 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1121 23:47:59.669783 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1121 23:49:21.118711 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
I1121 23:49:39.996446 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
I1121 23:49:43.075346 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
I1121 23:49:50.779389 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
==> kube-proxy [9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40] <==
I1121 23:47:31.549237 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1121 23:47:31.651147 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1121 23:47:31.651198 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.50"]
E1121 23:47:31.651275 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1121 23:47:31.974605 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1121 23:47:31.975156 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1121 23:47:31.975763 1 server_linux.go:132] "Using iptables Proxier"
I1121 23:47:32.024377 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1121 23:47:32.026629 1 server.go:527] "Version info" version="v1.34.1"
I1121 23:47:32.026711 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1121 23:47:32.034053 1 config.go:200] "Starting service config controller"
I1121 23:47:32.034241 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1121 23:47:32.034262 1 config.go:106] "Starting endpoint slice config controller"
I1121 23:47:32.034266 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1121 23:47:32.034276 1 config.go:403] "Starting serviceCIDR config controller"
I1121 23:47:32.034279 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1121 23:47:32.039494 1 config.go:309] "Starting node config controller"
I1121 23:47:32.039506 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1121 23:47:32.039512 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1121 23:47:32.134526 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1121 23:47:32.134549 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1121 23:47:32.134580 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d] <==
E1121 23:47:22.475530 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1121 23:47:22.475591 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1121 23:47:22.475644 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1121 23:47:22.475674 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1121 23:47:22.475781 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1121 23:47:22.475833 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1121 23:47:22.475877 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1121 23:47:22.476028 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1121 23:47:22.476096 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1121 23:47:23.318227 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1121 23:47:23.496497 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1121 23:47:23.525267 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1121 23:47:23.575530 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1121 23:47:23.578656 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1121 23:47:23.593013 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1121 23:47:23.593144 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1121 23:47:23.685009 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1121 23:47:23.695610 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1121 23:47:23.719024 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1121 23:47:23.735984 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1121 23:47:23.781311 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1121 23:47:23.797047 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1121 23:47:23.818758 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1121 23:47:23.836424 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
I1121 23:47:26.255559 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Nov 21 23:51:09 addons-266876 kubelet[1502]: E1121 23:51:09.776819 1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="484e38f0-cbc8-4850-8360-07b1ea3e62a0"
Nov 21 23:51:15 addons-266876 kubelet[1502]: E1121 23:51:15.842612 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769075842103086 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:532066} inodes_used:{value:186}}"
Nov 21 23:51:15 addons-266876 kubelet[1502]: E1121 23:51:15.842636 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769075842103086 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:532066} inodes_used:{value:186}}"
Nov 21 23:51:25 addons-266876 kubelet[1502]: E1121 23:51:25.845995 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769085845309750 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:532066} inodes_used:{value:186}}"
Nov 21 23:51:25 addons-266876 kubelet[1502]: E1121 23:51:25.846044 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769085845309750 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:532066} inodes_used:{value:186}}"
Nov 21 23:51:33 addons-266876 kubelet[1502]: I1121 23:51:33.400743 1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-pd4sx" secret="" err="secret \"gcp-auth\" not found"
Nov 21 23:51:35 addons-266876 kubelet[1502]: E1121 23:51:35.848700 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769095848211281 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:532066} inodes_used:{value:186}}"
Nov 21 23:51:35 addons-266876 kubelet[1502]: E1121 23:51:35.849180 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769095848211281 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:532066} inodes_used:{value:186}}"
Nov 21 23:51:39 addons-266876 kubelet[1502]: E1121 23:51:39.782023 1502 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
Nov 21 23:51:39 addons-266876 kubelet[1502]: E1121 23:51:39.782100 1502 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
Nov 21 23:51:39 addons-266876 kubelet[1502]: E1121 23:51:39.782332 1502 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-b9a0f343-b333-4fad-87d0-620f3a86218a_local-path-storage(2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb): ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Nov 21 23:51:39 addons-266876 kubelet[1502]: E1121 23:51:39.782373 1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-b9a0f343-b333-4fad-87d0-620f3a86218a" podUID="2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb"
Nov 21 23:51:40 addons-266876 kubelet[1502]: I1121 23:51:40.161405 1502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb-script\") pod \"2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb\" (UID: \"2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb\") "
Nov 21 23:51:40 addons-266876 kubelet[1502]: I1121 23:51:40.161460 1502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68x6v\" (UniqueName: \"kubernetes.io/projected/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb-kube-api-access-68x6v\") pod \"2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb\" (UID: \"2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb\") "
Nov 21 23:51:40 addons-266876 kubelet[1502]: I1121 23:51:40.161479 1502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb-data\") pod \"2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb\" (UID: \"2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb\") "
Nov 21 23:51:40 addons-266876 kubelet[1502]: I1121 23:51:40.161602 1502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb-data" (OuterVolumeSpecName: "data") pod "2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb" (UID: "2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 21 23:51:40 addons-266876 kubelet[1502]: I1121 23:51:40.162285 1502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb-script" (OuterVolumeSpecName: "script") pod "2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb" (UID: "2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 21 23:51:40 addons-266876 kubelet[1502]: I1121 23:51:40.164734 1502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb-kube-api-access-68x6v" (OuterVolumeSpecName: "kube-api-access-68x6v") pod "2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb" (UID: "2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb"). InnerVolumeSpecName "kube-api-access-68x6v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 21 23:51:40 addons-266876 kubelet[1502]: I1121 23:51:40.261911 1502 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-68x6v\" (UniqueName: \"kubernetes.io/projected/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb-kube-api-access-68x6v\") on node \"addons-266876\" DevicePath \"\""
Nov 21 23:51:40 addons-266876 kubelet[1502]: I1121 23:51:40.262029 1502 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb-data\") on node \"addons-266876\" DevicePath \"\""
Nov 21 23:51:40 addons-266876 kubelet[1502]: I1121 23:51:40.262040 1502 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb-script\") on node \"addons-266876\" DevicePath \"\""
Nov 21 23:51:41 addons-266876 kubelet[1502]: I1121 23:51:41.405192 1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb" path="/var/lib/kubelet/pods/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb/volumes"
Nov 21 23:51:44 addons-266876 kubelet[1502]: I1121 23:51:44.498696 1502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhdwl\" (UniqueName: \"kubernetes.io/projected/06b9a800-a9fc-4174-8e6f-34e5c7b7563b-kube-api-access-dhdwl\") pod \"hello-world-app-5d498dc89-sqvxb\" (UID: \"06b9a800-a9fc-4174-8e6f-34e5c7b7563b\") " pod="default/hello-world-app-5d498dc89-sqvxb"
Nov 21 23:51:45 addons-266876 kubelet[1502]: E1121 23:51:45.853093 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769105852419863 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:532066} inodes_used:{value:186}}"
Nov 21 23:51:45 addons-266876 kubelet[1502]: E1121 23:51:45.853351 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769105852419863 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:532066} inodes_used:{value:186}}"
==> storage-provisioner [62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409] <==
W1121 23:51:20.738101 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:22.741403 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:22.747061 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:24.752110 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:24.761356 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:26.765317 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:26.770812 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:28.774285 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:28.778844 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:30.783252 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:30.792389 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:32.796502 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:32.805042 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:34.809701 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:34.815367 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:36.819307 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:36.825137 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:38.828306 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:38.836273 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:40.840865 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:40.849438 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:42.853477 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:42.861312 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:44.869590 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1121 23:51:44.884265 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-266876 -n addons-266876
helpers_test.go:269: (dbg) Run: kubectl --context addons-266876 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-sqvxb task-pv-pod test-local-path ingress-nginx-admission-create-xq799 ingress-nginx-admission-patch-ht8dl
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context addons-266876 describe pod hello-world-app-5d498dc89-sqvxb task-pv-pod test-local-path ingress-nginx-admission-create-xq799 ingress-nginx-admission-patch-ht8dl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-266876 describe pod hello-world-app-5d498dc89-sqvxb task-pv-pod test-local-path ingress-nginx-admission-create-xq799 ingress-nginx-admission-patch-ht8dl: exit status 1 (97.09513ms)
-- stdout --
Name: hello-world-app-5d498dc89-sqvxb
Namespace: default
Priority: 0
Service Account: default
Node: addons-266876/192.168.39.50
Start Time: Fri, 21 Nov 2025 23:51:44 +0000
Labels: app=hello-world-app
pod-template-hash=5d498dc89
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hello-world-app-5d498dc89
Containers:
hello-world-app:
Container ID:
Image: docker.io/kicbase/echo-server:1.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dhdwl (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-dhdwl:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2s default-scheduler Successfully assigned default/hello-world-app-5d498dc89-sqvxb to addons-266876
Normal Pulling 2s kubelet Pulling image "docker.io/kicbase/echo-server:1.0"
Name: task-pv-pod
Namespace: default
Priority: 0
Service Account: default
Node: addons-266876/192.168.39.50
Start Time: Fri, 21 Nov 2025 23:49:41 +0000
Labels: app=task-pv-pod
Annotations: <none>
Status: Pending
IP: 10.244.0.29
IPs:
IP: 10.244.0.29
Containers:
task-pv-container:
Container ID:
Image: docker.io/nginx
Image ID:
Port: 80/TCP (http-server)
Host Port: 0/TCP (http-server)
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cj5dd (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
task-pv-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: hpvc
ReadOnly: false
kube-api-access-cj5dd:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m5s default-scheduler Successfully assigned default/task-pv-pod to addons-266876
Warning Failed 37s kubelet Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning Failed 37s kubelet Error: ErrImagePull
Normal BackOff 37s kubelet Back-off pulling image "docker.io/nginx"
Warning Failed 37s kubelet Error: ImagePullBackOff
Normal Pulling 22s (x2 over 2m5s) kubelet Pulling image "docker.io/nginx"
Name: test-local-path
Namespace: default
Priority: 0
Service Account: default
Node: <none>
Labels: run=test-local-path
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
busybox:
Image: busybox:stable
Port: <none>
Host Port: <none>
Command:
sh
-c
echo 'local-path-provisioner' > /test/file1
Environment: <none>
Mounts:
/test from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-24fvr (ro)
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: test-pvc
ReadOnly: false
kube-api-access-24fvr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-xq799" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-ht8dl" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-266876 describe pod hello-world-app-5d498dc89-sqvxb task-pv-pod test-local-path ingress-nginx-admission-create-xq799 ingress-nginx-admission-patch-ht8dl: exit status 1
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-266876 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-266876 addons disable ingress-dns --alsologtostderr -v=1: (1.183279152s)
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-266876 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-266876 addons disable ingress --alsologtostderr -v=1: (7.816361415s)
--- FAIL: TestAddons/parallel/Ingress (159.73s)