=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:269: (dbg) Run: kubectl --context addons-712341 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:294: (dbg) Run: kubectl --context addons-712341 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:307: (dbg) Run: kubectl --context addons-712341 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:312: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [56885611-8b41-4e56-b6f9-8cc75bfdbfd9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [56885611-8b41-4e56-b6f9-8cc75bfdbfd9] Running
addons_test.go:312: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.005336463s
I1209 01:58:52.641123 258854 kapi.go:150] Service nginx in namespace default found.
addons_test.go:324: (dbg) Run: out/minikube-linux-amd64 -p addons-712341 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:324: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-712341 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.794406633s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:340: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
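Note: curl's exit status 28 is its "operation timed out" code, which suggests the request never got a response from the ingress controller on 127.0.0.1:80 inside the VM. A minimal manual re-check, assuming the addons-712341 profile is still running (the -m 10 timeout is an addition for faster feedback, not part of the test):
  out/minikube-linux-amd64 -p addons-712341 ssh "curl -s -m 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
  kubectl --context addons-712341 -n ingress-nginx get pods,svc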
addons_test.go:348: (dbg) Run: kubectl --context addons-712341 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:353: (dbg) Run: out/minikube-linux-amd64 -p addons-712341 ip
addons_test.go:359: (dbg) Run: nslookup hello-john.test 192.168.39.107
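Note: the ingress-dns check resolves a test hostname (defined in testdata/ingress-dns-example-v1.yaml) against the VM IP, where the ingress-dns addon serves DNS. A hedged sketch of the same lookup done by hand:
  IP=$(out/minikube-linux-amd64 -p addons-712341 ip)
  nslookup hello-john.test "$IP"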
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-712341 -n addons-712341
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p addons-712341 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-712341 logs -n 25: (1.306509604s)
helpers_test.go:260: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-045512 │ download-only-045512 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
│ start │ --download-only -p binary-mirror-413418 --alsologtostderr --binary-mirror http://127.0.0.1:33411 --driver=kvm2 --container-runtime=crio │ binary-mirror-413418 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ │
│ delete │ -p binary-mirror-413418 │ binary-mirror-413418 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:55 UTC │
│ addons │ enable dashboard -p addons-712341 │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ │
│ addons │ disable dashboard -p addons-712341 │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ │
│ start │ -p addons-712341 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:55 UTC │ 09 Dec 25 01:58 UTC │
│ addons │ addons-712341 addons disable volcano --alsologtostderr -v=1 │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
│ addons │ addons-712341 addons disable gcp-auth --alsologtostderr -v=1 │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
│ addons │ enable headlamp -p addons-712341 --alsologtostderr -v=1 │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
│ addons │ addons-712341 addons disable metrics-server --alsologtostderr -v=1 │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
│ addons │ addons-712341 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
│ addons │ addons-712341 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
│ ip │ addons-712341 ip │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
│ addons │ addons-712341 addons disable registry --alsologtostderr -v=1 │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
│ addons │ addons-712341 addons disable headlamp --alsologtostderr -v=1 │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
│ ssh │ addons-712341 ssh cat /opt/local-path-provisioner/pvc-5f1d4e27-646c-4ec7-9bd6-c32e7c190c45_default_test-pvc/file1 │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
│ addons │ addons-712341 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:59 UTC │
│ addons │ addons-712341 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ 09 Dec 25 01:58 UTC │
│ ssh │ addons-712341 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:58 UTC │ │
│ addons │ addons-712341 addons disable yakd --alsologtostderr -v=1 │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-712341 │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ addons │ addons-712341 addons disable registry-creds --alsologtostderr -v=1 │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ addons │ addons-712341 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ addons │ addons-712341 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 01:59 UTC │ 09 Dec 25 01:59 UTC │
│ ip │ addons-712341 ip │ addons-712341 │ jenkins │ v1.37.0 │ 09 Dec 25 02:01 UTC │ 09 Dec 25 02:01 UTC │
└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/09 01:55:51
Running on machine: ubuntu-20-agent-13
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1209 01:55:51.913915 259666 out.go:360] Setting OutFile to fd 1 ...
I1209 01:55:51.914035 259666 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 01:55:51.914042 259666 out.go:374] Setting ErrFile to fd 2...
I1209 01:55:51.914049 259666 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 01:55:51.914237 259666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-254936/.minikube/bin
I1209 01:55:51.914843 259666 out.go:368] Setting JSON to false
I1209 01:55:51.915755 259666 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27502,"bootTime":1765217850,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1209 01:55:51.915834 259666 start.go:143] virtualization: kvm guest
I1209 01:55:51.917867 259666 out.go:179] * [addons-712341] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1209 01:55:51.919297 259666 out.go:179] - MINIKUBE_LOCATION=22081
I1209 01:55:51.919305 259666 notify.go:221] Checking for updates...
I1209 01:55:51.922532 259666 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1209 01:55:51.924042 259666 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22081-254936/kubeconfig
I1209 01:55:51.925428 259666 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-254936/.minikube
I1209 01:55:51.926969 259666 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1209 01:55:51.928424 259666 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1209 01:55:51.929898 259666 driver.go:422] Setting default libvirt URI to qemu:///system
I1209 01:55:51.961030 259666 out.go:179] * Using the kvm2 driver based on user configuration
I1209 01:55:51.962300 259666 start.go:309] selected driver: kvm2
I1209 01:55:51.962315 259666 start.go:927] validating driver "kvm2" against <nil>
I1209 01:55:51.962328 259666 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1209 01:55:51.963041 259666 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1209 01:55:51.963291 259666 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1209 01:55:51.963317 259666 cni.go:84] Creating CNI manager for ""
I1209 01:55:51.963358 259666 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1209 01:55:51.963368 259666 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1209 01:55:51.963408 259666 start.go:353] cluster config:
{Name:addons-712341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-712341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1209 01:55:51.963506 259666 iso.go:125] acquiring lock: {Name:mk5e3a22cdf6cd1ed24c9a04adaf1049140c04b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1209 01:55:51.965076 259666 out.go:179] * Starting "addons-712341" primary control-plane node in "addons-712341" cluster
I1209 01:55:51.966686 259666 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1209 01:55:51.966721 259666 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
I1209 01:55:51.966729 259666 cache.go:65] Caching tarball of preloaded images
I1209 01:55:51.966818 259666 preload.go:238] Found /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1209 01:55:51.966851 259666 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
I1209 01:55:51.967214 259666 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/config.json ...
I1209 01:55:51.967244 259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/config.json: {Name:mkbc318e9832bd68097f4bd0339c0ce1fe587cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:55:51.967430 259666 start.go:360] acquireMachinesLock for addons-712341: {Name:mkb4bf4bc2a6ad90b53de9be214957ca6809cd32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1209 01:55:51.967505 259666 start.go:364] duration metric: took 53.333µs to acquireMachinesLock for "addons-712341"
I1209 01:55:51.967530 259666 start.go:93] Provisioning new machine with config: &{Name:addons-712341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-712341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1209 01:55:51.967609 259666 start.go:125] createHost starting for "" (driver="kvm2")
I1209 01:55:51.970193 259666 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1209 01:55:51.970407 259666 start.go:159] libmachine.API.Create for "addons-712341" (driver="kvm2")
I1209 01:55:51.970444 259666 client.go:173] LocalClient.Create starting
I1209 01:55:51.970559 259666 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem
I1209 01:55:52.007577 259666 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem
I1209 01:55:52.072361 259666 main.go:143] libmachine: creating domain...
I1209 01:55:52.072386 259666 main.go:143] libmachine: creating network...
I1209 01:55:52.074044 259666 main.go:143] libmachine: found existing default network
I1209 01:55:52.074296 259666 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1209 01:55:52.074887 259666 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d3e060}
I1209 01:55:52.075028 259666 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-712341</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1209 01:55:52.081172 259666 main.go:143] libmachine: creating private network mk-addons-712341 192.168.39.0/24...
I1209 01:55:52.157083 259666 main.go:143] libmachine: private network mk-addons-712341 192.168.39.0/24 created
I1209 01:55:52.157424 259666 main.go:143] libmachine: <network>
<name>mk-addons-712341</name>
<uuid>0556de88-06ab-485d-9c24-8217acb00de5</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:ce:5b:fb'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
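Note: both XML dumps above are standard libvirt network definitions; libvirt fills in the uuid, bridge name, and MAC address on creation. Assuming virsh access to the same qemu:///system URI from the cluster config, the network can be inspected directly:
  virsh --connect qemu:///system net-list --all
  virsh --connect qemu:///system net-dumpxml mk-addons-712341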
I1209 01:55:52.157458 259666 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341 ...
I1209 01:55:52.157492 259666 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22081-254936/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
I1209 01:55:52.157507 259666 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22081-254936/.minikube
I1209 01:55:52.157593 259666 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22081-254936/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22081-254936/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso...
I1209 01:55:52.421048 259666 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa...
I1209 01:55:52.570398 259666 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/addons-712341.rawdisk...
I1209 01:55:52.570456 259666 main.go:143] libmachine: Writing magic tar header
I1209 01:55:52.570484 259666 main.go:143] libmachine: Writing SSH key tar header
I1209 01:55:52.570557 259666 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341 ...
I1209 01:55:52.570616 259666 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341
I1209 01:55:52.570655 259666 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341 (perms=drwx------)
I1209 01:55:52.570669 259666 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22081-254936/.minikube/machines
I1209 01:55:52.570679 259666 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22081-254936/.minikube/machines (perms=drwxr-xr-x)
I1209 01:55:52.570687 259666 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22081-254936/.minikube
I1209 01:55:52.570699 259666 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22081-254936/.minikube (perms=drwxr-xr-x)
I1209 01:55:52.570710 259666 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22081-254936
I1209 01:55:52.570725 259666 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22081-254936 (perms=drwxrwxr-x)
I1209 01:55:52.570735 259666 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1209 01:55:52.570742 259666 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1209 01:55:52.570753 259666 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1209 01:55:52.570760 259666 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1209 01:55:52.570771 259666 main.go:143] libmachine: checking permissions on dir: /home
I1209 01:55:52.570778 259666 main.go:143] libmachine: skipping /home - not owner
I1209 01:55:52.570782 259666 main.go:143] libmachine: defining domain...
I1209 01:55:52.572359 259666 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-712341</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/addons-712341.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-712341'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1209 01:55:52.577681 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:07:49:0d in network default
I1209 01:55:52.578359 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:55:52.578380 259666 main.go:143] libmachine: starting domain...
I1209 01:55:52.578385 259666 main.go:143] libmachine: ensuring networks are active...
I1209 01:55:52.579326 259666 main.go:143] libmachine: Ensuring network default is active
I1209 01:55:52.579778 259666 main.go:143] libmachine: Ensuring network mk-addons-712341 is active
I1209 01:55:52.580598 259666 main.go:143] libmachine: getting domain XML...
I1209 01:55:52.581902 259666 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-712341</name>
<uuid>870ec28c-5b88-46bc-b908-87091429a736</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/addons-712341.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:c8:8f:0e'/>
<source network='mk-addons-712341'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:07:49:0d'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
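Note: comparing the defined XML with the running dump shows what libvirt filled in: the uuid, machine type, controllers, and PCI addresses for the two virtio NICs (one on mk-addons-712341, one on the default NAT network). A sketch for inspecting the live domain, assuming the same connection URI:
  virsh --connect qemu:///system dumpxml addons-712341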
I1209 01:55:53.877327 259666 main.go:143] libmachine: waiting for domain to start...
I1209 01:55:53.878860 259666 main.go:143] libmachine: domain is now running
I1209 01:55:53.878883 259666 main.go:143] libmachine: waiting for IP...
I1209 01:55:53.879804 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:55:53.880424 259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
I1209 01:55:53.880457 259666 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:53.880867 259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
I1209 01:55:53.880926 259666 retry.go:31] will retry after 265.397085ms: waiting for domain to come up
I1209 01:55:54.148713 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:55:54.149545 259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
I1209 01:55:54.149566 259666 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:54.149932 259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
I1209 01:55:54.150001 259666 retry.go:31] will retry after 307.385775ms: waiting for domain to come up
I1209 01:55:54.458653 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:55:54.459509 259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
I1209 01:55:54.459528 259666 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:54.460047 259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
I1209 01:55:54.460093 259666 retry.go:31] will retry after 395.041534ms: waiting for domain to come up
I1209 01:55:54.856811 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:55:54.857628 259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
I1209 01:55:54.857646 259666 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:54.858038 259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
I1209 01:55:54.858082 259666 retry.go:31] will retry after 374.275906ms: waiting for domain to come up
I1209 01:55:55.233758 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:55:55.234551 259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
I1209 01:55:55.234570 259666 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:55.234982 259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
I1209 01:55:55.235028 259666 retry.go:31] will retry after 747.649275ms: waiting for domain to come up
I1209 01:55:55.984035 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:55:55.984743 259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
I1209 01:55:55.984755 259666 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:55.985073 259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
I1209 01:55:55.985109 259666 retry.go:31] will retry after 865.91237ms: waiting for domain to come up
I1209 01:55:56.852567 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:55:56.853208 259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
I1209 01:55:56.853229 259666 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:56.853581 259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
I1209 01:55:56.853621 259666 retry.go:31] will retry after 1.052488212s: waiting for domain to come up
I1209 01:55:57.908017 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:55:57.908872 259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
I1209 01:55:57.908903 259666 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:57.909276 259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
I1209 01:55:57.909322 259666 retry.go:31] will retry after 1.187266906s: waiting for domain to come up
I1209 01:55:59.098780 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:55:59.099456 259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
I1209 01:55:59.099474 259666 main.go:143] libmachine: trying to list again with source=arp
I1209 01:55:59.099856 259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
I1209 01:55:59.099900 259666 retry.go:31] will retry after 1.462600886s: waiting for domain to come up
I1209 01:56:00.564917 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:00.565697 259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
I1209 01:56:00.565718 259666 main.go:143] libmachine: trying to list again with source=arp
I1209 01:56:00.566186 259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
I1209 01:56:00.566236 259666 retry.go:31] will retry after 1.786857993s: waiting for domain to come up
I1209 01:56:02.355216 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:02.356156 259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
I1209 01:56:02.356186 259666 main.go:143] libmachine: trying to list again with source=arp
I1209 01:56:02.356607 259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
I1209 01:56:02.356674 259666 retry.go:31] will retry after 2.31997202s: waiting for domain to come up
I1209 01:56:04.678970 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:04.679666 259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
I1209 01:56:04.679684 259666 main.go:143] libmachine: trying to list again with source=arp
I1209 01:56:04.680272 259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
I1209 01:56:04.680321 259666 retry.go:31] will retry after 3.342048068s: waiting for domain to come up
I1209 01:56:08.024041 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:08.024748 259666 main.go:143] libmachine: no network interface addresses found for domain addons-712341 (source=lease)
I1209 01:56:08.024764 259666 main.go:143] libmachine: trying to list again with source=arp
I1209 01:56:08.025270 259666 main.go:143] libmachine: unable to find current IP address of domain addons-712341 in network mk-addons-712341 (interfaces detected: [])
I1209 01:56:08.025321 259666 retry.go:31] will retry after 4.37421634s: waiting for domain to come up
I1209 01:56:12.400710 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:12.401456 259666 main.go:143] libmachine: domain addons-712341 has current primary IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:12.401476 259666 main.go:143] libmachine: found domain IP: 192.168.39.107
I1209 01:56:12.401485 259666 main.go:143] libmachine: reserving static IP address...
I1209 01:56:12.401874 259666 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-712341", mac: "52:54:00:c8:8f:0e", ip: "192.168.39.107"} in network mk-addons-712341
I1209 01:56:12.607096 259666 main.go:143] libmachine: reserved static IP address 192.168.39.107 for domain addons-712341
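Note: the lease/arp probing above maps onto what virsh itself exposes; once the guest holds a lease, the reserved address can be cross-checked against libvirt's DHCP state. A sketch, assuming the network still exists:
  virsh --connect qemu:///system net-dhcp-leases mk-addons-712341
  virsh --connect qemu:///system domifaddr addons-712341 --source lease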
I1209 01:56:12.607121 259666 main.go:143] libmachine: waiting for SSH...
I1209 01:56:12.607140 259666 main.go:143] libmachine: Getting to WaitForSSH function...
I1209 01:56:12.610473 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:12.611080 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:12.611125 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:12.611385 259666 main.go:143] libmachine: Using SSH client type: native
I1209 01:56:12.611719 259666 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil> [] 0s} 192.168.39.107 22 <nil> <nil>}
I1209 01:56:12.611734 259666 main.go:143] libmachine: About to run SSH command:
exit 0
I1209 01:56:12.744903 259666 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1209 01:56:12.745324 259666 main.go:143] libmachine: domain creation complete
I1209 01:56:12.746730 259666 machine.go:94] provisionDockerMachine start ...
I1209 01:56:12.749465 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:12.749882 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:12.749908 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:12.750143 259666 main.go:143] libmachine: Using SSH client type: native
I1209 01:56:12.750389 259666 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil> [] 0s} 192.168.39.107 22 <nil> <nil>}
I1209 01:56:12.750402 259666 main.go:143] libmachine: About to run SSH command:
hostname
I1209 01:56:12.871555 259666 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1209 01:56:12.871592 259666 buildroot.go:166] provisioning hostname "addons-712341"
I1209 01:56:12.874844 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:12.875400 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:12.875435 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:12.875691 259666 main.go:143] libmachine: Using SSH client type: native
I1209 01:56:12.875907 259666 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil> [] 0s} 192.168.39.107 22 <nil> <nil>}
I1209 01:56:12.875921 259666 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-712341 && echo "addons-712341" | sudo tee /etc/hostname
I1209 01:56:13.015342 259666 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-712341
I1209 01:56:13.018668 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:13.019132 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:13.019166 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:13.019363 259666 main.go:143] libmachine: Using SSH client type: native
I1209 01:56:13.019625 259666 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil> [] 0s} 192.168.39.107 22 <nil> <nil>}
I1209 01:56:13.019642 259666 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-712341' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-712341/g' /etc/hosts;
else
echo '127.0.1.1 addons-712341' | sudo tee -a /etc/hosts;
fi
fi
I1209 01:56:13.150128 259666 main.go:143] libmachine: SSH cmd err, output: <nil>:
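Note: the script above follows the usual Debian-style convention of mapping 127.0.1.1 to the machine's own hostname in /etc/hosts. A quick verification sketch:
  out/minikube-linux-amd64 -p addons-712341 ssh "hostname; grep 127.0.1.1 /etc/hosts"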
I1209 01:56:13.150174 259666 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22081-254936/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-254936/.minikube}
I1209 01:56:13.150238 259666 buildroot.go:174] setting up certificates
I1209 01:56:13.150249 259666 provision.go:84] configureAuth start
I1209 01:56:13.153162 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:13.153669 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:13.153697 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:13.156274 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:13.156657 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:13.156679 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:13.156810 259666 provision.go:143] copyHostCerts
I1209 01:56:13.156932 259666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/ca.pem (1078 bytes)
I1209 01:56:13.157133 259666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/cert.pem (1123 bytes)
I1209 01:56:13.157239 259666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-254936/.minikube/key.pem (1679 bytes)
I1209 01:56:13.157331 259666 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem org=jenkins.addons-712341 san=[127.0.0.1 192.168.39.107 addons-712341 localhost minikube]
I1209 01:56:13.302563 259666 provision.go:177] copyRemoteCerts
I1209 01:56:13.302629 259666 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1209 01:56:13.305577 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:13.306131 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:13.306164 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:13.306378 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:13.399103 259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1209 01:56:13.432081 259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1209 01:56:13.466622 259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1209 01:56:13.498405 259666 provision.go:87] duration metric: took 348.137312ms to configureAuth
I1209 01:56:13.498438 259666 buildroot.go:189] setting minikube options for container-runtime
I1209 01:56:13.498635 259666 config.go:182] Loaded profile config "addons-712341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 01:56:13.502099 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:13.502553 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:13.502581 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:13.502878 259666 main.go:143] libmachine: Using SSH client type: native
I1209 01:56:13.503105 259666 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil> [] 0s} 192.168.39.107 22 <nil> <nil>}
I1209 01:56:13.503123 259666 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1209 01:56:13.938598 259666 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
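Note: this drops an environment file (presumably consumed by the crio systemd unit in the minikube ISO) that marks the 10.96.0.0/12 service CIDR as an insecure registry, then restarts crio. A verification sketch using the path from the command above:
  out/minikube-linux-amd64 -p addons-712341 ssh "cat /etc/sysconfig/crio.minikube; systemctl is-active crio"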
I1209 01:56:13.938629 259666 machine.go:97] duration metric: took 1.191878481s to provisionDockerMachine
I1209 01:56:13.938642 259666 client.go:176] duration metric: took 21.968186831s to LocalClient.Create
I1209 01:56:13.938697 259666 start.go:167] duration metric: took 21.968265519s to libmachine.API.Create "addons-712341"
I1209 01:56:13.938710 259666 start.go:293] postStartSetup for "addons-712341" (driver="kvm2")
I1209 01:56:13.938723 259666 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1209 01:56:13.938814 259666 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1209 01:56:13.942173 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:13.942615 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:13.942638 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:13.942785 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:14.035480 259666 ssh_runner.go:195] Run: cat /etc/os-release
I1209 01:56:14.041119 259666 info.go:137] Remote host: Buildroot 2025.02
I1209 01:56:14.041165 259666 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-254936/.minikube/addons for local assets ...
I1209 01:56:14.041259 259666 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-254936/.minikube/files for local assets ...
I1209 01:56:14.041296 259666 start.go:296] duration metric: took 102.577833ms for postStartSetup
I1209 01:56:14.079987 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:14.080490 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:14.080525 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:14.080837 259666 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/config.json ...
I1209 01:56:14.081108 259666 start.go:128] duration metric: took 22.113482831s to createHost
I1209 01:56:14.083545 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:14.084053 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:14.084082 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:14.084311 259666 main.go:143] libmachine: Using SSH client type: native
I1209 01:56:14.084523 259666 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d740] 0x8503e0 <nil> [] 0s} 192.168.39.107 22 <nil> <nil>}
I1209 01:56:14.084533 259666 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1209 01:56:14.206933 259666 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765245374.173473349
I1209 01:56:14.206983 259666 fix.go:216] guest clock: 1765245374.173473349
I1209 01:56:14.206992 259666 fix.go:229] Guest: 2025-12-09 01:56:14.173473349 +0000 UTC Remote: 2025-12-09 01:56:14.081142247 +0000 UTC m=+22.219468127 (delta=92.331102ms)
I1209 01:56:14.207010 259666 fix.go:200] guest clock delta is within tolerance: 92.331102ms
I1209 01:56:14.207016 259666 start.go:83] releasing machines lock for "addons-712341", held for 22.239498517s
I1209 01:56:14.210153 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:14.210598 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:14.210626 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:14.211219 259666 ssh_runner.go:195] Run: cat /version.json
I1209 01:56:14.211304 259666 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1209 01:56:14.214631 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:14.215099 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:14.215100 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:14.215160 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:14.215375 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:14.215685 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:14.215718 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:14.215908 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:14.324217 259666 ssh_runner.go:195] Run: systemctl --version
I1209 01:56:14.331424 259666 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1209 01:56:14.776975 259666 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1209 01:56:14.786222 259666 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1209 01:56:14.786312 259666 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1209 01:56:14.810312 259666 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
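Note that the pre-existing bridge/podman CNI configs are renamed with a .mk_disabled suffix rather than deleted, so they cannot shadow the conflist minikube writes later but remain recoverable. A quick check on the node (sketch):

    ls /etc/cni/net.d/*.mk_disabled
    # e.g. /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled, per the log line above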
I1209 01:56:14.810358 259666 start.go:496] detecting cgroup driver to use...
I1209 01:56:14.810945 259666 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1209 01:56:14.834532 259666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1209 01:56:14.855018 259666 docker.go:218] disabling cri-docker service (if available) ...
I1209 01:56:14.855097 259666 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1209 01:56:14.873689 259666 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1209 01:56:14.892588 259666 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1209 01:56:15.050682 259666 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1209 01:56:15.201515 259666 docker.go:234] disabling docker service ...
I1209 01:56:15.201600 259666 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1209 01:56:15.219273 259666 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1209 01:56:15.236589 259666 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1209 01:56:15.461375 259666 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1209 01:56:15.606689 259666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1209 01:56:15.623886 259666 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1209 01:56:15.649306 259666 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1209 01:56:15.649375 259666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1209 01:56:15.662394 259666 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1209 01:56:15.662493 259666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1209 01:56:15.675986 259666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1209 01:56:15.689401 259666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1209 01:56:15.702735 259666 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1209 01:56:15.718200 259666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1209 01:56:15.731978 259666 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1209 01:56:15.756449 259666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
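Net effect of the sed edits above: /etc/crio/crio.conf.d/02-crio.conf now pins the pause image, the cgroup driver, conmon's cgroup, and opens unprivileged ports via default_sysctls. A sketch of how to confirm it (the key names come from the log; their exact placement in the drop-in is an assumption):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",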
I1209 01:56:15.770276 259666 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1209 01:56:15.782285 259666 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1209 01:56:15.782357 259666 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1209 01:56:15.804234 259666 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
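The failed sysctl above is expected on a fresh guest: the net.bridge.* keys only exist once the br_netfilter module is loaded, which is why minikube immediately loads it. The dependency can be reproduced by hand (sketch):

    sysctl net.bridge.bridge-nf-call-iptables   # fails: /proc/sys/net/bridge absent
    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # succeeds once the module is in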
I1209 01:56:15.817233 259666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 01:56:15.962723 259666 ssh_runner.go:195] Run: sudo systemctl restart crio
I1209 01:56:16.084810 259666 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1209 01:56:16.084937 259666 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1209 01:56:16.090926 259666 start.go:564] Will wait 60s for crictl version
I1209 01:56:16.091023 259666 ssh_runner.go:195] Run: which crictl
I1209 01:56:16.095927 259666 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1209 01:56:16.136298 259666 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1209 01:56:16.136400 259666 ssh_runner.go:195] Run: crio --version
I1209 01:56:16.169730 259666 ssh_runner.go:195] Run: crio --version
I1209 01:56:16.204330 259666 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
I1209 01:56:16.208592 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:16.209048 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:16.209074 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:16.209342 259666 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1209 01:56:16.214627 259666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1209 01:56:16.230641 259666 kubeadm.go:884] updating cluster {Name:addons-712341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-712341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1209 01:56:16.230774 259666 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1209 01:56:16.230844 259666 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 01:56:16.263569 259666 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
I1209 01:56:16.263646 259666 ssh_runner.go:195] Run: which lz4
I1209 01:56:16.268308 259666 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1209 01:56:16.273636 259666 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1209 01:56:16.273675 259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
I1209 01:56:17.705360 259666 crio.go:462] duration metric: took 1.43708175s to copy over tarball
I1209 01:56:17.705457 259666 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1209 01:56:19.113008 259666 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.407507771s)
I1209 01:56:19.113035 259666 crio.go:469] duration metric: took 1.407642549s to extract the tarball
I1209 01:56:19.113043 259666 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1209 01:56:19.150009 259666 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 01:56:19.191699 259666 crio.go:514] all images are preloaded for cri-o runtime.
I1209 01:56:19.191722 259666 cache_images.go:86] Images are preloaded, skipping loading
I1209 01:56:19.191731 259666 kubeadm.go:935] updating node { 192.168.39.107 8443 v1.34.2 crio true true} ...
I1209 01:56:19.191895 259666 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-712341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.107
[Install]
config:
{KubernetesVersion:v1.34.2 ClusterName:addons-712341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
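The empty ExecStart= in the unit text above is the standard systemd drop-in idiom: an override must first clear the inherited ExecStart before supplying its own. Once the drop-in is copied into place (see the 10-kubeadm.conf transfer below), the merged unit can be inspected with:

    systemctl cat kubelet   # base kubelet.service plus the 10-kubeadm.conf override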
I1209 01:56:19.192020 259666 ssh_runner.go:195] Run: crio config
I1209 01:56:19.240095 259666 cni.go:84] Creating CNI manager for ""
I1209 01:56:19.240118 259666 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1209 01:56:19.240141 259666 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1209 01:56:19.240169 259666 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-712341 NodeName:addons-712341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1209 01:56:19.240343 259666 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.107
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-712341"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.107"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
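This four-document manifest (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what kubeadm init consumes below. Recent kubeadm releases can lint such a file without applying anything; a hedged sketch, since minikube itself skips this step:

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml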
I1209 01:56:19.240426 259666 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
I1209 01:56:19.253940 259666 binaries.go:51] Found k8s binaries, skipping transfer
I1209 01:56:19.254021 259666 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1209 01:56:19.266741 259666 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I1209 01:56:19.288805 259666 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1209 01:56:19.311434 259666 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
I1209 01:56:19.334386 259666 ssh_runner.go:195] Run: grep 192.168.39.107 control-plane.minikube.internal$ /etc/hosts
I1209 01:56:19.339285 259666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1209 01:56:19.355563 259666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 01:56:19.505046 259666 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1209 01:56:19.536671 259666 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341 for IP: 192.168.39.107
I1209 01:56:19.536706 259666 certs.go:195] generating shared ca certs ...
I1209 01:56:19.536731 259666 certs.go:227] acquiring lock for ca certs: {Name:mk538e8c05758246ce904354c7e7ace78887d181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:19.536988 259666 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-254936/.minikube/ca.key
I1209 01:56:19.588349 259666 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt ...
I1209 01:56:19.588384 259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt: {Name:mk25984b3e32ec9734e4cda7734262a1d8004f76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:19.588566 259666 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-254936/.minikube/ca.key ...
I1209 01:56:19.588578 259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/ca.key: {Name:mkdb18c3362861140a9d6339271fb0245c707c4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:19.588653 259666 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.key
I1209 01:56:19.616688 259666 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.crt ...
I1209 01:56:19.616718 259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.crt: {Name:mkf201d94ce9a38ac3d2e3ba9845b3ebc459b0cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:19.616892 259666 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.key ...
I1209 01:56:19.616905 259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.key: {Name:mk4c6834e6ed7ee10958c4e629376a30863c157f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:19.616974 259666 certs.go:257] generating profile certs ...
I1209 01:56:19.617046 259666 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.key
I1209 01:56:19.617061 259666 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt with IP's: []
I1209 01:56:19.675363 259666 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt ...
I1209 01:56:19.675392 259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.crt: {Name:mk2c6d9f6571abe7785206344ce34d3204c868fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:19.675553 259666 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.key ...
I1209 01:56:19.675564 259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/client.key: {Name:mk7400720eb975505d75ffc51097ac8ebc198c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:19.675646 259666 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.key.ed6545a3
I1209 01:56:19.675667 259666 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.crt.ed6545a3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.107]
I1209 01:56:19.794358 259666 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.crt.ed6545a3 ...
I1209 01:56:19.794387 259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.crt.ed6545a3: {Name:mkea62fdcc90205c9f4d045336442f3cf6198861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:19.794553 259666 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.key.ed6545a3 ...
I1209 01:56:19.794566 259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.key.ed6545a3: {Name:mk47cad7e2e670aeb0c3d5eabd691889e41b7c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:19.794644 259666 certs.go:382] copying /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.crt.ed6545a3 -> /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.crt
I1209 01:56:19.794713 259666 certs.go:386] copying /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.key.ed6545a3 -> /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.key
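The assembled apiserver cert is signed for the service VIP, loopback, and the node IP (the IP list logged at 01:56:19.675667). If the SAN set ever needs verifying, openssl can print it from the profile path above (sketch):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # expect IP Address:10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.107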
I1209 01:56:19.794760 259666 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/proxy-client.key
I1209 01:56:19.794777 259666 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/proxy-client.crt with IP's: []
I1209 01:56:19.883801 259666 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/proxy-client.crt ...
I1209 01:56:19.883842 259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/proxy-client.crt: {Name:mk1385aa835cc65b91e728b4ed5b58a37ba1a4d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:19.884009 259666 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/proxy-client.key ...
I1209 01:56:19.884023 259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/proxy-client.key: {Name:mk07fbac3e70b8b1b55759f366cd54064f477753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:19.884203 259666 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca-key.pem (1679 bytes)
I1209 01:56:19.884241 259666 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/ca.pem (1078 bytes)
I1209 01:56:19.884270 259666 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/cert.pem (1123 bytes)
I1209 01:56:19.884296 259666 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-254936/.minikube/certs/key.pem (1679 bytes)
I1209 01:56:19.884926 259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1209 01:56:19.918215 259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1209 01:56:19.949658 259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1209 01:56:19.981549 259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1209 01:56:20.013363 259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1209 01:56:20.046556 259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1209 01:56:20.080767 259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1209 01:56:20.113752 259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/profiles/addons-712341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1209 01:56:20.146090 259666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-254936/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1209 01:56:20.179542 259666 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1209 01:56:20.202726 259666 ssh_runner.go:195] Run: openssl version
I1209 01:56:20.209978 259666 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1209 01:56:20.228445 259666 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1209 01:56:20.242414 259666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1209 01:56:20.249302 259666 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 9 01:56 /usr/share/ca-certificates/minikubeCA.pem
I1209 01:56:20.249372 259666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1209 01:56:20.260886 259666 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1209 01:56:20.277482 259666 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
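The link name b5213941.0 is not arbitrary: OpenSSL looks up CAs in /etc/ssl/certs by subject hash, and the value printed by the openssl x509 -hash run above becomes the filename (the .0 suffix disambiguates hash collisions):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941, hence the symlink /etc/ssl/certs/b5213941.0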
I1209 01:56:20.293408 259666 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1209 01:56:20.299356 259666 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1209 01:56:20.299432 259666 kubeadm.go:401] StartCluster: {Name:addons-712341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-712341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1209 01:56:20.299521 259666 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1209 01:56:20.299577 259666 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1209 01:56:20.338305 259666 cri.go:89] found id: ""
I1209 01:56:20.338383 259666 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1209 01:56:20.352247 259666 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1209 01:56:20.365865 259666 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1209 01:56:20.379255 259666 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1209 01:56:20.379277 259666 kubeadm.go:158] found existing configuration files:
I1209 01:56:20.379342 259666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1209 01:56:20.394007 259666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1209 01:56:20.394071 259666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1209 01:56:20.406959 259666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1209 01:56:20.418745 259666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1209 01:56:20.418817 259666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1209 01:56:20.432103 259666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1209 01:56:20.444308 259666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1209 01:56:20.444371 259666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1209 01:56:20.457064 259666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1209 01:56:20.469264 259666 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1209 01:56:20.469328 259666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1209 01:56:20.482738 259666 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1209 01:56:20.539264 259666 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
I1209 01:56:20.539333 259666 kubeadm.go:319] [preflight] Running pre-flight checks
I1209 01:56:20.651956 259666 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1209 01:56:20.652108 259666 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1209 01:56:20.652228 259666 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1209 01:56:20.663439 259666 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1209 01:56:20.705449 259666 out.go:252] - Generating certificates and keys ...
I1209 01:56:20.705587 259666 kubeadm.go:319] [certs] Using existing ca certificate authority
I1209 01:56:20.705702 259666 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1209 01:56:20.731893 259666 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1209 01:56:21.123152 259666 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1209 01:56:21.435961 259666 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1209 01:56:22.056052 259666 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1209 01:56:22.330258 259666 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1209 01:56:22.330731 259666 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-712341 localhost] and IPs [192.168.39.107 127.0.0.1 ::1]
I1209 01:56:22.534479 259666 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1209 01:56:22.535545 259666 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-712341 localhost] and IPs [192.168.39.107 127.0.0.1 ::1]
I1209 01:56:22.839733 259666 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1209 01:56:23.345878 259666 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1209 01:56:23.492556 259666 kubeadm.go:319] [certs] Generating "sa" key and public key
I1209 01:56:23.492627 259666 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1209 01:56:23.808202 259666 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1209 01:56:24.210780 259666 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1209 01:56:24.519003 259666 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1209 01:56:24.731456 259666 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1209 01:56:25.386737 259666 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1209 01:56:25.389328 259666 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1209 01:56:25.392681 259666 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1209 01:56:25.395117 259666 out.go:252] - Booting up control plane ...
I1209 01:56:25.395242 259666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1209 01:56:25.395330 259666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1209 01:56:25.395405 259666 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1209 01:56:25.413924 259666 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1209 01:56:25.414050 259666 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1209 01:56:25.423352 259666 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1209 01:56:25.423498 259666 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1209 01:56:25.423566 259666 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1209 01:56:25.601588 259666 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1209 01:56:25.601741 259666 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1209 01:56:27.102573 259666 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501728248s
I1209 01:56:27.106697 259666 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1209 01:56:27.106805 259666 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.107:8443/livez
I1209 01:56:27.106916 259666 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1209 01:56:27.107043 259666 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1209 01:56:30.705714 259666 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.599747652s
I1209 01:56:31.358423 259666 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.252497162s
I1209 01:56:33.106263 259666 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001710585s
I1209 01:56:33.128662 259666 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1209 01:56:33.144350 259666 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1209 01:56:33.170578 259666 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1209 01:56:33.172164 259666 kubeadm.go:319] [mark-control-plane] Marking the node addons-712341 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1209 01:56:33.191436 259666 kubeadm.go:319] [bootstrap-token] Using token: 7em9fe.8onfni9y9x6y6345
I1209 01:56:33.192896 259666 out.go:252] - Configuring RBAC rules ...
I1209 01:56:33.193068 259666 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1209 01:56:33.199789 259666 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1209 01:56:33.210993 259666 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1209 01:56:33.215851 259666 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1209 01:56:33.220512 259666 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1209 01:56:33.224892 259666 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1209 01:56:33.512967 259666 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1209 01:56:33.964307 259666 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1209 01:56:34.514479 259666 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1209 01:56:34.514515 259666 kubeadm.go:319]
I1209 01:56:34.514600 259666 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1209 01:56:34.514614 259666 kubeadm.go:319]
I1209 01:56:34.514734 259666 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1209 01:56:34.514750 259666 kubeadm.go:319]
I1209 01:56:34.514785 259666 kubeadm.go:319] mkdir -p $HOME/.kube
I1209 01:56:34.514916 259666 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1209 01:56:34.514993 259666 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1209 01:56:34.515006 259666 kubeadm.go:319]
I1209 01:56:34.515081 259666 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1209 01:56:34.515093 259666 kubeadm.go:319]
I1209 01:56:34.515176 259666 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1209 01:56:34.515188 259666 kubeadm.go:319]
I1209 01:56:34.515266 259666 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1209 01:56:34.515373 259666 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1209 01:56:34.515481 259666 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1209 01:56:34.515492 259666 kubeadm.go:319]
I1209 01:56:34.515633 259666 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1209 01:56:34.515752 259666 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1209 01:56:34.515760 259666 kubeadm.go:319]
I1209 01:56:34.515878 259666 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7em9fe.8onfni9y9x6y6345 \
I1209 01:56:34.516049 259666 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:0be7e0a7baa75d08b526e8b854bf3b813e93f67dd991ef9945e4881192856bde \
I1209 01:56:34.516087 259666 kubeadm.go:319] --control-plane
I1209 01:56:34.516096 259666 kubeadm.go:319]
I1209 01:56:34.516232 259666 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1209 01:56:34.516257 259666 kubeadm.go:319]
I1209 01:56:34.516386 259666 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7em9fe.8onfni9y9x6y6345 \
I1209 01:56:34.516533 259666 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:0be7e0a7baa75d08b526e8b854bf3b813e93f67dd991ef9945e4881192856bde
I1209 01:56:34.517919 259666 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
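For reference, the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. kubeadm's documented derivation, pointed at this run's cert dir (/var/lib/minikube/certs), would be (sketch):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'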
I1209 01:56:34.517955 259666 cni.go:84] Creating CNI manager for ""
I1209 01:56:34.517966 259666 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1209 01:56:34.519911 259666 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1209 01:56:34.521458 259666 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1209 01:56:34.535811 259666 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
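The 496-byte conflist written above is minikube's stock bridge CNI config, presumably wired to the pod CIDR chosen earlier (10.244.0.0/16); its payload isn't echoed in the log but can be inspected directly:

    out/minikube-linux-amd64 -p addons-712341 ssh "cat /etc/cni/net.d/1-k8s.conflist"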
I1209 01:56:34.571626 259666 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1209 01:56:34.571716 259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-712341 minikube.k8s.io/updated_at=2025_12_09T01_56_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d minikube.k8s.io/name=addons-712341 minikube.k8s.io/primary=true
I1209 01:56:34.571716 259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
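The minikube-rbac binding created above grants cluster-admin to the kube-system default service account, which is what lets the addon manifests apply without per-addon RBAC. One way to confirm it from the host (sketch):

    kubectl --context addons-712341 auth can-i '*' '*' \
        --as=system:serviceaccount:kube-system:default
    # expect: yes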
I1209 01:56:34.737094 259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:34.778269 259666 ops.go:34] apiserver oom_adj: -16
I1209 01:56:35.237350 259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:35.737733 259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:36.237949 259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:36.737187 259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:37.237296 259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:37.737971 259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:38.237362 259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:38.737165 259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:39.237194 259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:39.737115 259666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1209 01:56:39.843309 259666 kubeadm.go:1114] duration metric: took 5.271690903s to wait for elevateKubeSystemPrivileges
I1209 01:56:39.843359 259666 kubeadm.go:403] duration metric: took 19.543933591s to StartCluster
I1209 01:56:39.843385 259666 settings.go:142] acquiring lock: {Name:mkec34d0133156567c6c6050ab2f8de3f197c63b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:39.843542 259666 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22081-254936/kubeconfig
I1209 01:56:39.844035 259666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-254936/kubeconfig: {Name:mkaafbe94dbea876978b17d37022d815642e1aad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 01:56:39.844312 259666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1209 01:56:39.844306 259666 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1209 01:56:39.844339 259666 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1209 01:56:39.844460 259666 addons.go:70] Setting yakd=true in profile "addons-712341"
I1209 01:56:39.844469 259666 addons.go:70] Setting inspektor-gadget=true in profile "addons-712341"
I1209 01:56:39.844487 259666 addons.go:239] Setting addon inspektor-gadget=true in "addons-712341"
I1209 01:56:39.844489 259666 addons.go:70] Setting registry-creds=true in profile "addons-712341"
I1209 01:56:39.844495 259666 addons.go:70] Setting storage-provisioner=true in profile "addons-712341"
I1209 01:56:39.844506 259666 addons.go:239] Setting addon storage-provisioner=true in "addons-712341"
I1209 01:56:39.844523 259666 addons.go:239] Setting addon registry-creds=true in "addons-712341"
I1209 01:56:39.844532 259666 host.go:66] Checking if "addons-712341" exists ...
I1209 01:56:39.844528 259666 addons.go:70] Setting volcano=true in profile "addons-712341"
I1209 01:56:39.844543 259666 addons.go:70] Setting volumesnapshots=true in profile "addons-712341"
I1209 01:56:39.844553 259666 addons.go:239] Setting addon volcano=true in "addons-712341"
I1209 01:56:39.844553 259666 addons.go:239] Setting addon volumesnapshots=true in "addons-712341"
I1209 01:56:39.844559 259666 host.go:66] Checking if "addons-712341" exists ...
I1209 01:56:39.844573 259666 host.go:66] Checking if "addons-712341" exists ...
I1209 01:56:39.844582 259666 host.go:66] Checking if "addons-712341" exists ...
I1209 01:56:39.844596 259666 config.go:182] Loaded profile config "addons-712341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 01:56:39.844657 259666 addons.go:70] Setting default-storageclass=true in profile "addons-712341"
I1209 01:56:39.844689 259666 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-712341"
I1209 01:56:39.844812 259666 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-712341"
I1209 01:56:39.844863 259666 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-712341"
I1209 01:56:39.844892 259666 host.go:66] Checking if "addons-712341" exists ...
I1209 01:56:39.844962 259666 addons.go:70] Setting ingress=true in profile "addons-712341"
I1209 01:56:39.844994 259666 addons.go:239] Setting addon ingress=true in "addons-712341"
I1209 01:56:39.845044 259666 host.go:66] Checking if "addons-712341" exists ...
I1209 01:56:39.845723 259666 addons.go:70] Setting registry=true in profile "addons-712341"
I1209 01:56:39.845754 259666 addons.go:239] Setting addon registry=true in "addons-712341"
I1209 01:56:39.845783 259666 host.go:66] Checking if "addons-712341" exists ...
I1209 01:56:39.845851 259666 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-712341"
I1209 01:56:39.844532 259666 host.go:66] Checking if "addons-712341" exists ...
I1209 01:56:39.845881 259666 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-712341"
I1209 01:56:39.845908 259666 host.go:66] Checking if "addons-712341" exists ...
I1209 01:56:39.845993 259666 addons.go:70] Setting gcp-auth=true in profile "addons-712341"
I1209 01:56:39.846014 259666 mustload.go:66] Loading cluster: addons-712341
I1209 01:56:39.846186 259666 config.go:182] Loaded profile config "addons-712341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1209 01:56:39.846244 259666 addons.go:70] Setting ingress-dns=true in profile "addons-712341"
I1209 01:56:39.846269 259666 addons.go:239] Setting addon ingress-dns=true in "addons-712341"
I1209 01:56:39.846308 259666 host.go:66] Checking if "addons-712341" exists ...
I1209 01:56:39.846470 259666 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-712341"
I1209 01:56:39.846654 259666 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-712341"
I1209 01:56:39.846501 259666 addons.go:70] Setting metrics-server=true in profile "addons-712341"
I1209 01:56:39.846867 259666 addons.go:239] Setting addon metrics-server=true in "addons-712341"
I1209 01:56:39.846898 259666 host.go:66] Checking if "addons-712341" exists ...
I1209 01:56:39.846522 259666 addons.go:70] Setting cloud-spanner=true in profile "addons-712341"
I1209 01:56:39.847153 259666 out.go:179] * Verifying Kubernetes components...
I1209 01:56:39.847166 259666 addons.go:239] Setting addon cloud-spanner=true in "addons-712341"
I1209 01:56:39.847249 259666 host.go:66] Checking if "addons-712341" exists ...
I1209 01:56:39.844487 259666 addons.go:239] Setting addon yakd=true in "addons-712341"
I1209 01:56:39.847501 259666 host.go:66] Checking if "addons-712341" exists ...
I1209 01:56:39.846535 259666 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-712341"
I1209 01:56:39.847893 259666 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-712341"
I1209 01:56:39.847936 259666 host.go:66] Checking if "addons-712341" exists ...
I1209 01:56:39.849485 259666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 01:56:39.852876 259666 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
I1209 01:56:39.852978 259666 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1209 01:56:39.852998 259666 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1209 01:56:39.853531 259666 addons.go:239] Setting addon default-storageclass=true in "addons-712341"
I1209 01:56:39.853729 259666 host.go:66] Checking if "addons-712341" exists ...
I1209 01:56:39.855154 259666 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1209 01:56:39.855236 259666 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1209 01:56:39.855625 259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1209 01:56:39.855240 259666 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1209 01:56:39.855249 259666 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1209 01:56:39.855771 259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
W1209 01:56:39.855296 259666 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1209 01:56:39.856094 259666 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1209 01:56:39.856111 259666 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1209 01:56:39.856271 259666 host.go:66] Checking if "addons-712341" exists ...
I1209 01:56:39.857017 259666 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1209 01:56:39.857043 259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1209 01:56:39.857810 259666 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1209 01:56:39.857947 259666 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1209 01:56:39.858254 259666 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-712341"
I1209 01:56:39.858316 259666 host.go:66] Checking if "addons-712341" exists ...
I1209 01:56:39.858730 259666 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1209 01:56:39.859802 259666 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1209 01:56:39.859802 259666 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1209 01:56:39.859811 259666 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1209 01:56:39.859948 259666 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1209 01:56:39.860400 259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1209 01:56:39.860771 259666 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1209 01:56:39.860791 259666 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1209 01:56:39.860962 259666 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1209 01:56:39.860972 259666 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1209 01:56:39.860985 259666 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
I1209 01:56:39.861016 259666 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1209 01:56:39.861629 259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1209 01:56:39.861945 259666 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1209 01:56:39.861958 259666 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1209 01:56:39.861970 259666 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1209 01:56:39.861973 259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1209 01:56:39.862781 259666 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1209 01:56:39.862809 259666 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1209 01:56:39.862781 259666 out.go:179] - Using image docker.io/registry:3.0.0
I1209 01:56:39.862934 259666 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1209 01:56:39.863324 259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1209 01:56:39.863618 259666 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
I1209 01:56:39.865047 259666 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1209 01:56:39.865268 259666 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1209 01:56:39.865488 259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1209 01:56:39.865996 259666 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1209 01:56:39.866116 259666 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1209 01:56:39.866404 259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
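
The "scp memory --> <path> (N bytes)" lines above indicate the runner is streaming a manifest rendered in memory directly to the guest path, rather than copying a file from the host disk. A minimal sketch of that pattern over an established SSH client, using golang.org/x/crypto/ssh — illustrative only, not minikube's actual ssh_runner implementation:

    package sketch

    import (
    	"bytes"

    	"golang.org/x/crypto/ssh"
    )

    // pushBytes streams an in-memory asset to remotePath on the guest,
    // the way the "scp memory --> ..." log lines suggest. Sketch only;
    // the real ssh_runner code differs.
    func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
    	session, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer session.Close()
    	session.Stdin = bytes.NewReader(data)
    	// sudo tee, because /etc/kubernetes/addons is root-owned on the guest.
    	return session.Run("sudo tee " + remotePath + " >/dev/null")
    }
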
I1209 01:56:39.866118 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.867682 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.867612 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.868521 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.868801 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:39.868880 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.868882 259666 out.go:179] - Using image docker.io/busybox:stable
I1209 01:56:39.868957 259666 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1209 01:56:39.869663 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:39.869700 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.869962 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:39.870198 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:39.870249 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.870737 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:39.871182 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:39.871224 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.871364 259666 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1209 01:56:39.871392 259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1209 01:56:39.871447 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:39.872347 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:39.872791 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.873759 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.874071 259666 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1209 01:56:39.875284 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:39.875336 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.875530 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:39.875577 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.875989 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.876193 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:39.876235 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.876604 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.876636 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:39.877077 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.877470 259666 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1209 01:56:39.877879 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:39.877955 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:39.877994 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.878213 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:39.878250 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.878319 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.878453 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:39.878485 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.878463 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:39.878498 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.878723 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.878853 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:39.879250 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:39.879265 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:39.880121 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:39.880158 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.880254 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:39.880282 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.880289 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.880515 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:39.880731 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:39.880877 259666 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1209 01:56:39.881108 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:39.881148 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.881410 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:39.881640 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.882064 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:39.882093 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.882252 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:39.884531 259666 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1209 01:56:39.886086 259666 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1209 01:56:39.887446 259666 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1209 01:56:39.887488 259666 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1209 01:56:39.890971 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.891705 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:39.891743 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:39.891948 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
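
Each "sshutil.go:53] new ssh client" line corresponds to one parallel addon-installer goroutine opening its own connection using the fields shown in the struct (IP, Port, SSHKeyPath, Username). A self-contained sketch of such a client with golang.org/x/crypto/ssh; the key path and the command run are stand-ins:

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Fields mirror the client struct in the log: IP, Port, SSHKeyPath, Username.
    	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/addons-712341/id_rsa"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.107:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer session.Close()
    	out, _ := session.CombinedOutput("hostname")
    	fmt.Printf("connected: %s", out)
    }
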
W1209 01:56:40.226476 259666 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44658->192.168.39.107:22: read: connection reset by peer
I1209 01:56:40.226526 259666 retry.go:31] will retry after 210.313621ms: ssh: handshake failed: read tcp 192.168.39.1:44658->192.168.39.107:22: read: connection reset by peer
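
The handshake failure above is treated as transient: retry.go reschedules the dial after a short, slightly randomized delay ("will retry after 210.313621ms"). A runnable sketch of that retry-with-backoff shape — the helper name and the exact backoff policy are assumptions, not minikube's retry package:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff re-runs fn with a growing, jittered delay,
    // mirroring the "will retry after ..." behaviour in the log.
    // Illustrative policy, not minikube's actual retry package.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	err := retryWithBackoff(3, 100*time.Millisecond, func() error {
    		return errors.New("ssh: handshake failed: connection reset by peer")
    	})
    	fmt.Println("final:", err)
    }
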
I1209 01:56:40.721348 259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1209 01:56:40.765512 259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1209 01:56:40.794919 259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1209 01:56:40.827003 259666 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1209 01:56:40.827038 259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1209 01:56:40.873254 259666 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1209 01:56:40.873291 259666 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1209 01:56:40.927412 259666 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1209 01:56:40.927446 259666 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1209 01:56:40.952921 259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1209 01:56:40.982112 259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1209 01:56:40.997398 259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1209 01:56:41.011935 259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1209 01:56:41.022801 259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1209 01:56:41.028305 259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1209 01:56:41.043557 259666 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.199195114s)
I1209 01:56:41.043654 259666 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.194135982s)
I1209 01:56:41.043696 259666 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1209 01:56:41.043716 259666 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1209 01:56:41.043761 259666 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1209 01:56:41.043850 259666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
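
The bash pipeline above rewrites the coredns ConfigMap in place: the sed expression inserts a hosts stanza before the forward directive (and a log directive before errors), then pipes the result back through kubectl replace. After the edit the relevant Corefile fragment would read roughly (trimmed to the affected plugins):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

The hosts plugin answers queries for host.minikube.internal with the host-side gateway IP and falls through to the remaining plugins for everything else, which the "host record injected into CoreDNS's ConfigMap" line at 01:56:50 later confirms.
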
I1209 01:56:41.371264 259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1209 01:56:41.540489 259666 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1209 01:56:41.540531 259666 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1209 01:56:41.564199 259666 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1209 01:56:41.564232 259666 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1209 01:56:41.591615 259666 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1209 01:56:41.591656 259666 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1209 01:56:41.621118 259666 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1209 01:56:41.621152 259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1209 01:56:41.762637 259666 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1209 01:56:41.762693 259666 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1209 01:56:42.300348 259666 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1209 01:56:42.300384 259666 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1209 01:56:42.321836 259666 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1209 01:56:42.321868 259666 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1209 01:56:42.329733 259666 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1209 01:56:42.329782 259666 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1209 01:56:42.355572 259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1209 01:56:42.526693 259666 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1209 01:56:42.526731 259666 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1209 01:56:42.710948 259666 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1209 01:56:42.710982 259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1209 01:56:42.738385 259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1209 01:56:42.764950 259666 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1209 01:56:42.764985 259666 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1209 01:56:43.076339 259666 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1209 01:56:43.076372 259666 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1209 01:56:43.166119 259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1209 01:56:43.210899 259666 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1209 01:56:43.210947 259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1209 01:56:43.505346 259666 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1209 01:56:43.505377 259666 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1209 01:56:43.579477 259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.858079572s)
I1209 01:56:43.635789 259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1209 01:56:44.136928 259666 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1209 01:56:44.136959 259666 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1209 01:56:44.383525 259666 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1209 01:56:44.383559 259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1209 01:56:44.804448 259666 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1209 01:56:44.804485 259666 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1209 01:56:45.300454 259666 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1209 01:56:45.300486 259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1209 01:56:46.051879 259666 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1209 01:56:46.051930 259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1209 01:56:46.180692 259666 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1209 01:56:46.180734 259666 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1209 01:56:46.891289 259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1209 01:56:47.218859 259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.453285533s)
I1209 01:56:47.218963 259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.423990702s)
I1209 01:56:47.425891 259666 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1209 01:56:47.429205 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:47.429800 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:47.429852 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:47.430257 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:48.132124 259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.179158567s)
I1209 01:56:48.132249 259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.134819653s)
I1209 01:56:48.132234 259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.150058507s)
I1209 01:56:48.132324 259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.120363165s)
I1209 01:56:48.296248 259666 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1209 01:56:48.587835 259666 addons.go:239] Setting addon gcp-auth=true in "addons-712341"
I1209 01:56:48.587919 259666 host.go:66] Checking if "addons-712341" exists ...
I1209 01:56:48.590030 259666 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1209 01:56:48.592581 259666 main.go:143] libmachine: domain addons-712341 has defined MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:48.593058 259666 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:8f:0e", ip: ""} in network mk-addons-712341: {Iface:virbr1 ExpiryTime:2025-12-09 02:56:08 +0000 UTC Type:0 Mac:52:54:00:c8:8f:0e Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:addons-712341 Clientid:01:52:54:00:c8:8f:0e}
I1209 01:56:48.593083 259666 main.go:143] libmachine: domain addons-712341 has defined IP address 192.168.39.107 and MAC address 52:54:00:c8:8f:0e in network mk-addons-712341
I1209 01:56:48.593259 259666 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22081-254936/.minikube/machines/addons-712341/id_rsa Username:docker}
I1209 01:56:50.113973 259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.09110869s)
I1209 01:56:50.114024 259666 addons.go:495] Verifying addon ingress=true in "addons-712341"
I1209 01:56:50.114050 259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (9.085706809s)
I1209 01:56:50.114159 259666 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.070372525s)
I1209 01:56:50.114117 259666 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.070229331s)
I1209 01:56:50.114214 259666 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1209 01:56:50.114252 259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.742939387s)
I1209 01:56:50.114372 259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.75875164s)
I1209 01:56:50.114406 259666 addons.go:495] Verifying addon registry=true in "addons-712341"
I1209 01:56:50.114509 259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.948343881s)
I1209 01:56:50.114456 259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.376026923s)
I1209 01:56:50.115349 259666 addons.go:495] Verifying addon metrics-server=true in "addons-712341"
I1209 01:56:50.115077 259666 node_ready.go:35] waiting up to 6m0s for node "addons-712341" to be "Ready" ...
I1209 01:56:50.115858 259666 out.go:179] * Verifying registry addon...
I1209 01:56:50.115862 259666 out.go:179] * Verifying ingress addon...
I1209 01:56:50.116778 259666 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	minikube -p addons-712341 service yakd-dashboard -n yakd-dashboard
I1209 01:56:50.118596 259666 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1209 01:56:50.118737 259666 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1209 01:56:50.160627 259666 node_ready.go:49] node "addons-712341" is "Ready"
I1209 01:56:50.160668 259666 node_ready.go:38] duration metric: took 45.298865ms for node "addons-712341" to be "Ready" ...
I1209 01:56:50.160692 259666 api_server.go:52] waiting for apiserver process to appear ...
I1209 01:56:50.160759 259666 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1209 01:56:50.184979 259666 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1209 01:56:50.185013 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:50.185073 259666 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1209 01:56:50.185094 259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
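
kapi.go polls the pods matching each label selector until every one reports Running, which is why the "current state: Pending" lines repeat below. A compact client-go equivalent of that wait loop (the namespace and selector are the ones from this run; the loop itself is a sketch, not kapi.go, and assumes KUBECONFIG is set):

    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	for {
    		pods, err := cs.CoreV1().Pods("ingress-nginx").List(ctx,
    			metav1.ListOptions{LabelSelector: "app.kubernetes.io/name=ingress-nginx"})
    		if err != nil {
    			panic(err)
    		}
    		running := 0
    		for _, p := range pods.Items {
    			if p.Status.Phase == corev1.PodRunning {
    				running++
    			}
    		}
    		fmt.Printf("%d/%d pods running\n", running, len(pods.Items))
    		if len(pods.Items) > 0 && running == len(pods.Items) {
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }
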
I1209 01:56:50.613270 259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.977423664s)
W1209 01:56:50.613342 259666 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1209 01:56:50.613379 259666 retry.go:31] will retry after 331.842733ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
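
This failure is the classic CRD-establishment race: the same kubectl apply both creates the VolumeSnapshotClass CRD and a VolumeSnapshotClass object, and the API server has not finished registering the new kind when the object arrives — hence "ensure CRDs are installed first". minikube simply retries (the re-run with --force at 01:56:50.945 below succeeds once the CRDs are registered). One way to avoid the race entirely is to gate on the CRD's Established condition between two separate applies; a sketch shelling out to standard kubectl subcommands, with file names taken from the log:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func run(args ...string) {
    	out, err := exec.Command("kubectl", args...).CombinedOutput()
    	if err != nil {
    		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
    	}
    }

    func main() {
    	// 1. Create the CRD on its own.
    	run("apply", "-f", "snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
    	// 2. Block until the API server has established the new kind.
    	run("wait", "--for=condition=established", "--timeout=60s",
    		"crd/volumesnapshotclasses.snapshot.storage.k8s.io")
    	// 3. Only now is it safe to create objects of that kind.
    	run("apply", "-f", "csi-hostpath-snapshotclass.yaml")
    }
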
I1209 01:56:50.629391 259666 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-712341" context rescaled to 1 replicas
I1209 01:56:50.766495 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:50.770748 259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:50.945417 259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1209 01:56:51.140135 259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:51.141665 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:51.633657 259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:51.641713 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:52.068836 259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.177445878s)
I1209 01:56:52.068871 259666 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.478812094s)
I1209 01:56:52.068888 259666 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-712341"
I1209 01:56:52.068961 259666 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.908175899s)
I1209 01:56:52.069086 259666 api_server.go:72] duration metric: took 12.224663964s to wait for apiserver process to appear ...
I1209 01:56:52.069104 259666 api_server.go:88] waiting for apiserver healthz status ...
I1209 01:56:52.069128 259666 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
I1209 01:56:52.070627 259666 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1209 01:56:52.070629 259666 out.go:179] * Verifying csi-hostpath-driver addon...
I1209 01:56:52.072619 259666 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1209 01:56:52.073428 259666 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1209 01:56:52.073533 259666 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1209 01:56:52.073555 259666 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1209 01:56:52.077068 259666 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
ok
I1209 01:56:52.085787 259666 api_server.go:141] control plane version: v1.34.2
I1209 01:56:52.085847 259666 api_server.go:131] duration metric: took 16.729057ms to wait for apiserver health ...
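
The healthz probe above is a plain HTTPS GET against the apiserver's secure port, treating a 200 with body "ok" as healthy (default RBAC allows unauthenticated access to /healthz). A minimal equivalent in Go — endpoint taken from the log; skipping TLS verification is acceptable only against a throwaway test VM with a self-signed certificate:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Test cluster with a self-signed cert; never do this in production.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for {
    		resp, err := client.Get("https://192.168.39.107:8443/healthz")
    		if err == nil && resp.StatusCode == http.StatusOK {
    			resp.Body.Close()
    			fmt.Println("apiserver healthy")
    			return
    		}
    		if resp != nil {
    			resp.Body.Close()
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
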
I1209 01:56:52.085863 259666 system_pods.go:43] waiting for kube-system pods to appear ...
I1209 01:56:52.099490 259666 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1209 01:56:52.099517 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:52.103755 259666 system_pods.go:59] 20 kube-system pods found
I1209 01:56:52.103804 259666 system_pods.go:61] "amd-gpu-device-plugin-v9zls" [be0f5b68-1efc-4f03-b19d-adfa034a57b3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1209 01:56:52.103815 259666 system_pods.go:61] "coredns-66bc5c9577-shdck" [d0f44c72-0768-4808-a1c0-509d3e328c38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1209 01:56:52.103838 259666 system_pods.go:61] "coredns-66bc5c9577-v5f2r" [524b3b94-0cfa-457a-aa87-bbd516f29864] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1209 01:56:52.103848 259666 system_pods.go:61] "csi-hostpath-attacher-0" [056c3e94-e378-4434-95ae-158383485f4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1209 01:56:52.103856 259666 system_pods.go:61] "csi-hostpath-resizer-0" [3267d67d-4d7e-4816-841d-91e30d091abe] Pending
I1209 01:56:52.103865 259666 system_pods.go:61] "csi-hostpathplugin-kdsd6" [8b4341b2-33bd-408a-8472-4546030ef449] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1209 01:56:52.103871 259666 system_pods.go:61] "etcd-addons-712341" [dc56fb0e-3d25-4ecf-b7ac-d8f252ba1e90] Running
I1209 01:56:52.103896 259666 system_pods.go:61] "kube-apiserver-addons-712341" [c5304e82-26dd-44bb-81f4-3e1fa4178b40] Running
I1209 01:56:52.103902 259666 system_pods.go:61] "kube-controller-manager-addons-712341" [fa245aed-3fab-4f15-bd4e-0bd87b0850a9] Running
I1209 01:56:52.103911 259666 system_pods.go:61] "kube-ingress-dns-minikube" [756114fc-819b-48c7-9b13-f0fb6eb36384] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1209 01:56:52.103918 259666 system_pods.go:61] "kube-proxy-vk4qc" [8b43011e-4293-431e-838d-88f45ea2837d] Running
I1209 01:56:52.103924 259666 system_pods.go:61] "kube-scheduler-addons-712341" [6c9c9db7-76fe-46f9-ab73-306c1f5cc488] Running
I1209 01:56:52.103931 259666 system_pods.go:61] "metrics-server-85b7d694d7-kkqs4" [84337421-94b2-47bc-a027-73f7b42030a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1209 01:56:52.103940 259666 system_pods.go:61] "nvidia-device-plugin-daemonset-44sbc" [046c49b7-0e2c-4126-bc6a-ba9c44dcdfeb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1209 01:56:52.103952 259666 system_pods.go:61] "registry-6b586f9694-kbblm" [2debdb6b-823b-4310-974e-3cf03104d154] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1209 01:56:52.103962 259666 system_pods.go:61] "registry-creds-764b6fb674-4th89" [757a10af-9961-47d1-a4fa-5480787fe593] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1209 01:56:52.103970 259666 system_pods.go:61] "registry-proxy-w94f7" [66b090e3-ac51-4b13-a537-2f07c2a6961d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1209 01:56:52.103984 259666 system_pods.go:61] "snapshot-controller-7d9fbc56b8-6mgx6" [d34b4c88-e09a-4259-96d5-43b960cb1543] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1209 01:56:52.103993 259666 system_pods.go:61] "snapshot-controller-7d9fbc56b8-78tv4" [cfe91410-68d2-43fb-8b8f-a73756bfdf68] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1209 01:56:52.104001 259666 system_pods.go:61] "storage-provisioner" [7f5f0da7-b773-470f-999a-a04b68b1cfbc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1209 01:56:52.104013 259666 system_pods.go:74] duration metric: took 18.141772ms to wait for pod list to return data ...
I1209 01:56:52.104028 259666 default_sa.go:34] waiting for default service account to be created ...
I1209 01:56:52.135500 259666 default_sa.go:45] found service account: "default"
I1209 01:56:52.135537 259666 default_sa.go:55] duration metric: took 31.496154ms for default service account to be created ...
I1209 01:56:52.135552 259666 system_pods.go:116] waiting for k8s-apps to be running ...
I1209 01:56:52.145010 259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:52.196342 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:52.201428 259666 system_pods.go:86] 20 kube-system pods found
I1209 01:56:52.201472 259666 system_pods.go:89] "amd-gpu-device-plugin-v9zls" [be0f5b68-1efc-4f03-b19d-adfa034a57b3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1209 01:56:52.201483 259666 system_pods.go:89] "coredns-66bc5c9577-shdck" [d0f44c72-0768-4808-a1c0-509d3e328c38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1209 01:56:52.201495 259666 system_pods.go:89] "coredns-66bc5c9577-v5f2r" [524b3b94-0cfa-457a-aa87-bbd516f29864] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1209 01:56:52.201504 259666 system_pods.go:89] "csi-hostpath-attacher-0" [056c3e94-e378-4434-95ae-158383485f4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1209 01:56:52.201514 259666 system_pods.go:89] "csi-hostpath-resizer-0" [3267d67d-4d7e-4816-841d-91e30d091abe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1209 01:56:52.201522 259666 system_pods.go:89] "csi-hostpathplugin-kdsd6" [8b4341b2-33bd-408a-8472-4546030ef449] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1209 01:56:52.201527 259666 system_pods.go:89] "etcd-addons-712341" [dc56fb0e-3d25-4ecf-b7ac-d8f252ba1e90] Running
I1209 01:56:52.201534 259666 system_pods.go:89] "kube-apiserver-addons-712341" [c5304e82-26dd-44bb-81f4-3e1fa4178b40] Running
I1209 01:56:52.201542 259666 system_pods.go:89] "kube-controller-manager-addons-712341" [fa245aed-3fab-4f15-bd4e-0bd87b0850a9] Running
I1209 01:56:52.201547 259666 system_pods.go:89] "kube-ingress-dns-minikube" [756114fc-819b-48c7-9b13-f0fb6eb36384] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1209 01:56:52.201550 259666 system_pods.go:89] "kube-proxy-vk4qc" [8b43011e-4293-431e-838d-88f45ea2837d] Running
I1209 01:56:52.201554 259666 system_pods.go:89] "kube-scheduler-addons-712341" [6c9c9db7-76fe-46f9-ab73-306c1f5cc488] Running
I1209 01:56:52.201562 259666 system_pods.go:89] "metrics-server-85b7d694d7-kkqs4" [84337421-94b2-47bc-a027-73f7b42030a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1209 01:56:52.201570 259666 system_pods.go:89] "nvidia-device-plugin-daemonset-44sbc" [046c49b7-0e2c-4126-bc6a-ba9c44dcdfeb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1209 01:56:52.201577 259666 system_pods.go:89] "registry-6b586f9694-kbblm" [2debdb6b-823b-4310-974e-3cf03104d154] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1209 01:56:52.201588 259666 system_pods.go:89] "registry-creds-764b6fb674-4th89" [757a10af-9961-47d1-a4fa-5480787fe593] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1209 01:56:52.201600 259666 system_pods.go:89] "registry-proxy-w94f7" [66b090e3-ac51-4b13-a537-2f07c2a6961d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1209 01:56:52.201609 259666 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6mgx6" [d34b4c88-e09a-4259-96d5-43b960cb1543] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1209 01:56:52.201620 259666 system_pods.go:89] "snapshot-controller-7d9fbc56b8-78tv4" [cfe91410-68d2-43fb-8b8f-a73756bfdf68] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1209 01:56:52.201626 259666 system_pods.go:89] "storage-provisioner" [7f5f0da7-b773-470f-999a-a04b68b1cfbc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1209 01:56:52.201638 259666 system_pods.go:126] duration metric: took 66.07735ms to wait for k8s-apps to be running ...
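
Each entry above combines the pod's phase with its Ready and ContainersReady conditions, which is why a scheduled-but-unstarted pod prints as "Pending / Ready:ContainersNotReady (...)". A short client-go function that reproduces the same three pieces of state (a sketch; the caller supplies a clientset built as in the earlier wait-loop example):

    package sketch

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printPodStates mirrors the log's "name Phase / Ready:..." lines:
    // phase from status.phase, readiness from the Ready condition.
    func printPodStates(ctx context.Context, cs *kubernetes.Clientset) error {
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, p := range pods.Items {
    		ready := "Unknown"
    		for _, c := range p.Status.Conditions {
    			if c.Type == corev1.PodReady {
    				ready = string(c.Status) // True / False
    				if c.Status != corev1.ConditionTrue && c.Reason != "" {
    					ready = c.Reason // e.g. ContainersNotReady
    				}
    			}
    		}
    		fmt.Printf("%s %s / Ready:%s\n", p.Name, p.Status.Phase, ready)
    	}
    	return nil
    }
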
I1209 01:56:52.201649 259666 system_svc.go:44] waiting for kubelet service to be running ....
I1209 01:56:52.201711 259666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
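
systemctl is-active --quiet communicates only through its exit status (0 when the unit is active), so the runner needs no output parsing here. The same check from Go, as a sketch:

    package sketch

    import "os/exec"

    // kubeletRunning reports whether the kubelet unit is active; with
    // --quiet, systemctl prints nothing and exits 0 only when active.
    func kubeletRunning() bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }
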
I1209 01:56:52.244528 259666 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1209 01:56:52.244559 259666 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1209 01:56:52.442772 259666 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1209 01:56:52.442796 259666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1209 01:56:52.528225 259666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1209 01:56:52.584081 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:52.625928 259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:52.627492 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:53.088457 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:53.124364 259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:53.124382 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:53.462849 259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.51735524s)
I1209 01:56:53.462988 259666 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.261245572s)
I1209 01:56:53.463025 259666 system_svc.go:56] duration metric: took 1.261371822s WaitForService to wait for kubelet
I1209 01:56:53.463035 259666 kubeadm.go:587] duration metric: took 13.618618651s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1209 01:56:53.463054 259666 node_conditions.go:102] verifying NodePressure condition ...
I1209 01:56:53.469930 259666 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1209 01:56:53.469971 259666 node_conditions.go:123] node cpu capacity is 2
I1209 01:56:53.469995 259666 node_conditions.go:105] duration metric: took 6.936425ms to run NodePressure ...
I1209 01:56:53.470016 259666 start.go:242] waiting for startup goroutines ...
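The NodePressure verification above is essentially a read of each node's reported capacity, matching the ephemeral-storage and CPU lines in the log. A hedged client-go sketch of that lookup (the kubeconfig path is illustrative, and this is the pattern, not minikube's exact helper):

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		// Matches the two capacity lines above: ephemeral storage and CPU.
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
	}
}
```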
I1209 01:56:53.579625 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:53.626562 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1209 01:56:53.627154 259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:56:54.124663 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1209 01:56:54.164045 259666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.635775951s)
I1209 01:56:54.165086 259666 addons.go:495] Verifying addon gcp-auth=true in "addons-712341"
I1209 01:56:54.167332 259666 out.go:179] * Verifying gcp-auth addon...
I1209 01:56:54.169256 259666 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1209 01:56:54.228125 259666 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1209 01:56:54.228165 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
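Each kapi.go:96 line below is one iteration of a ~0.5 s poll over a pod label selector, printed until every matching pod reaches Running. A minimal sketch of such a wait loop using client-go; this illustrates the pattern, not minikube's exact kapi helper, and the package name is hypothetical:

```go
package kapi

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodsRunning polls pods matching selector in ns until all are Running
// or timeout elapses, logging one line per round like the output below.
func WaitForPodsRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // API hiccup or pod not created yet: retry
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil // still Pending, as in the rounds below
				}
			}
			return true, nil
		})
}
```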
[kapi.go:96 "waiting for pod" polling repeats every ~0.5s from 01:56:54 through 01:57:26 for the selectors "kubernetes.io/minikube-addons=registry", "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=csi-hostpath-driver", and "kubernetes.io/minikube-addons=gcp-auth"; every round reports the same state, Pending: [<nil>]]
I1209 01:57:26.129986 259666 kapi.go:107] duration metric: took 36.011243554s to wait for kubernetes.io/minikube-addons=registry ...
[polling continues every ~0.5s from 01:57:26 through 01:57:47 for "kubernetes.io/minikube-addons=csi-hostpath-driver", "app.kubernetes.io/name=ingress-nginx", and "kubernetes.io/minikube-addons=gcp-auth", all still Pending: [<nil>]]
I1209 01:57:48.079598 259666 kapi.go:107] duration metric: took 56.006171821s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
[polling continues every ~0.5s from 01:57:48 through 01:58:01 for "app.kubernetes.io/name=ingress-nginx" and "kubernetes.io/minikube-addons=gcp-auth", both still Pending: [<nil>] as of 01:58:01]
I1209 01:58:01.673413 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:58:02.123101 259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:58:02.173216 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:58:02.623570 259666 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1209 01:58:02.673217 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:58:03.123634 259666 kapi.go:107] duration metric: took 1m13.005030885s to wait for app.kubernetes.io/name=ingress-nginx ...
I1209 01:58:03.172724 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:58:03.735701 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:58:04.177065 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:58:04.673910 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:58:05.174468 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:58:05.673784 259666 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1209 01:58:06.174417 259666 kapi.go:107] duration metric: took 1m12.00515853s to wait for kubernetes.io/minikube-addons=gcp-auth ...
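For reference, the two label selectors those waits were polling can be inspected directly once the addons settle; a sketch (the gcp-auth namespace name is an assumption about that addon's layout, not something printed in this run):

# Inspect the pods behind the two selectors polled above.
kubectl --context addons-712341 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
# The gcp-auth addon's pods live in their own namespace (assumed here to be "gcp-auth").
kubectl --context addons-712341 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth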
I1209 01:58:06.176402 259666 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-712341 cluster.
I1209 01:58:06.177757 259666 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1209 01:58:06.179167 259666 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
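To make the `gcp-auth-skip-secret` hint above concrete, here is a minimal sketch of a pod that opts out of the credential mount. The pod name and sleep command are hypothetical; the label key comes from the message above, and the busybox image digest is the one that appears in the container list later in this log:

# Hypothetical pod the gcp-auth webhook should leave unmutated, because it
# carries the gcp-auth-skip-secret label key at creation time (per the hint above).
kubectl --context addons-712341 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-creds-demo            # hypothetical name
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
    command: ["sleep", "3600"]
EOF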
I1209 01:58:06.181268 259666 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, default-storageclass, nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner-rancher, registry-creds, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I1209 01:58:06.182601 259666 addons.go:530] duration metric: took 1m26.338266919s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner default-storageclass nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner-rancher registry-creds inspektor-gadget metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I1209 01:58:06.182655 259666 start.go:247] waiting for cluster config update ...
I1209 01:58:06.182683 259666 start.go:256] writing updated cluster config ...
I1209 01:58:06.182994 259666 ssh_runner.go:195] Run: rm -f paused
I1209 01:58:06.191625 259666 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1209 01:58:06.274995 259666 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-shdck" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:58:06.288662 259666 pod_ready.go:94] pod "coredns-66bc5c9577-shdck" is "Ready"
I1209 01:58:06.288705 259666 pod_ready.go:86] duration metric: took 13.669679ms for pod "coredns-66bc5c9577-shdck" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:58:06.292456 259666 pod_ready.go:83] waiting for pod "etcd-addons-712341" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:58:06.298582 259666 pod_ready.go:94] pod "etcd-addons-712341" is "Ready"
I1209 01:58:06.298627 259666 pod_ready.go:86] duration metric: took 6.137664ms for pod "etcd-addons-712341" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:58:06.301475 259666 pod_ready.go:83] waiting for pod "kube-apiserver-addons-712341" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:58:06.307937 259666 pod_ready.go:94] pod "kube-apiserver-addons-712341" is "Ready"
I1209 01:58:06.307975 259666 pod_ready.go:86] duration metric: took 6.464095ms for pod "kube-apiserver-addons-712341" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:58:06.310976 259666 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-712341" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:58:06.596027 259666 pod_ready.go:94] pod "kube-controller-manager-addons-712341" is "Ready"
I1209 01:58:06.596067 259666 pod_ready.go:86] duration metric: took 285.04526ms for pod "kube-controller-manager-addons-712341" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:58:06.797515 259666 pod_ready.go:83] waiting for pod "kube-proxy-vk4qc" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:58:07.261429 259666 pod_ready.go:94] pod "kube-proxy-vk4qc" is "Ready"
I1209 01:58:07.261457 259666 pod_ready.go:86] duration metric: took 463.913599ms for pod "kube-proxy-vk4qc" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:58:07.396631 259666 pod_ready.go:83] waiting for pod "kube-scheduler-addons-712341" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:58:07.796322 259666 pod_ready.go:94] pod "kube-scheduler-addons-712341" is "Ready"
I1209 01:58:07.796353 259666 pod_ready.go:86] duration metric: took 399.694019ms for pod "kube-scheduler-addons-712341" in "kube-system" namespace to be "Ready" or be gone ...
I1209 01:58:07.796368 259666 pod_ready.go:40] duration metric: took 1.604694946s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
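That extra wait is minikube's own readiness poll; roughly the same check (modulo the "or be gone" clause) can be reproduced by hand with kubectl wait, reusing the selectors and the 4m0s budget from the log lines above:

# Hand-rolled approximation of the extra kube-system readiness wait.
for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
    component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
  kubectl --context addons-712341 -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=4m0s
done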
I1209 01:58:07.843921 259666 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
I1209 01:58:07.845848 259666 out.go:179] * Done! kubectl is now configured to use "addons-712341" cluster and "default" namespace by default
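A quick way to confirm what that final message claims; a sketch, with the expected values in the comments taken from the two lines above:

kubectl config current-context   # expected: addons-712341
kubectl version                  # client 1.34.2 / server 1.34.2, i.e. the "minor skew: 0" above
kubectl get pods                 # runs against the "default" namespace without an explicit -n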
==> CRI-O <==
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.823487832Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a045cc05-aac7-4565-9364-4ade4ed238f9 name=/runtime.v1.RuntimeService/Version
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.825144104Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f84c201c-14bd-4785-a195-c7c58f1d305b name=/runtime.v1.ImageService/ImageFsInfo
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.826668321Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765245669826571470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545751,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f84c201c-14bd-4785-a195-c7c58f1d305b name=/runtime.v1.ImageService/ImageFsInfo
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.828396361Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9acedacc-b0bb-49d1-8c57-6508017c2953 name=/runtime.v1.RuntimeService/ListContainers
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.828472508Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9acedacc-b0bb-49d1-8c57-6508017c2953 name=/runtime.v1.RuntimeService/ListContainers
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.828954204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec15d060ae394906d39ef35df621d3eaa17eff94affb7d575b4004b993bb8387,PodSandboxId:62f0b8ae019de502b93b08128bc55fcf2b19162ed6caf21dd6c320accbc9cbcf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765245526447302776,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 56885611-8b41-4e56-b6f9-8cc75bfdbfd9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f587c28a8ce3c46b82eee4271e328e96588b54af9dcbc51395bc49b1c3cf5cb5,PodSandboxId:3c0b1ee3ed1034dfec65a6b682d4dc347ff9aea35f69421105196a0cda41475b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765245491470685947,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de8fd268-6e5a-4d89-89ef-8d352023a017,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca93564ea7dd662f18e42eeadee30ffbc06cd7c45ccdbea985fb8f36a4429a3d,PodSandboxId:f69cb0b70c06ea6d570b69b535d56ac56b7960c5302d4bddeefb01d520709a8f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765245481871542524,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-swb6n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1b839a85-e21a-4700-bdd5-73a4eb455656,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4318789e13291a608e2e9a415455ab5f4461ae68099375bf94ff7c7e5d2d5375,PodSandboxId:fe31d2996503040215e0c01f0a810cbd2fe242d024000ad576cc84789df1ae40,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765245448039684284,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d4sv2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 358a6b20-7ecd-43a5-bcd7-0ed30014543e,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3386cbf3ac7e87460fb2a04e7019500054049023b78cc5c926010c9b389697b,PodSandboxId:fc666f12e07f041eca7c227af7f72d42386b4dc46a40c2a77fe7fc1310b500eb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765245446327400051,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7bf82,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 94315895-0cf8-4263-8d0c-d3aa9b6dbe2b,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e2be0f8d767c790242e8b7b87b5c2c63447f49568e123be16a57d2df1139f42,PodSandboxId:73ad4dba12805d0d45c3ab7da1a7c244f5e83888673efc139454028d68f86c10,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765245435332705098,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756114fc-819b-48c7-9b13-f0fb6eb36384,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260555cd1575816836ddb050ffe5036a4263d87790b0362a7a833bdf6d25fdb5,PodSandboxId:10e24757d42b2a67cfec36df263a739da7031be1f40a8e8efc64cd3aa7a56a19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&Ima
geSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765245414418711010,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5f0da7-b773-470f-999a-a04b68b1cfbc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30dcac27e864e8939ded9c048c72e6aaf02e7fb23ca367d6998c8a3451001061,PodSandboxId:e1e470ec0036f562e2d3ba4058327fe7dba3b9556bc0ad8737f9a479e574df4a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Ima
ge:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765245410123959153,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-v9zls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be0f5b68-1efc-4f03-b19d-adfa034a57b3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38631b1bcc4c2e7248750e6d1133052729f2e37827e330e72bf02c4a81d8f68b,PodSandboxId:0fcc9f2967f1dfe6617a59473ed7c4fc75c6c8bf8900d20d29a281cc7287610e,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765245401911110108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-shdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f44c72-0768-4808-a1c0-509d3e328c38,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6720b9b4382c48c64fdec86c2fd0596e617c82196ba5f4b5489e136a804fc6fb,PodSandboxId:c02b4d25a160be1076b454bcffb215cb8e5dcddd53d5702208a4f51964224f3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765245400953649001,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vk4qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b43011e-4293-431e-838d-88f45ea2837d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a3b82a29bc88ba34fdd0a63cfa749adabfbbce5ee66a7027143a11789da78ba,PodSandboxId:5cd8e6de89e438bec91b41e04acc882cf651e09c170a9d307c836acb3a5106fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765245387963030793,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce3ac49c9daa5dc52e59239b1562bf5a,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":102
57,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ecfa308eea4cdeebe3c9474876bba25ef96e20f8e8cf4305f0bf1a32112ee5b,PodSandboxId:d34cb6f659c30db478556b006b08926efdd4ac502cb7f85e396aa485f9802e5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765245387936682755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983b6a8f4c7cd5049430c8725659e085,},Annotations:map[string]string{io.kubernetes.container.hash:
e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:685da6ee8ce553eb479d57c5570e5ce09b45f9f091f643861572f0b00fa9f7c4,PodSandboxId:e610390c074e940470ed9c320800e40d3cfcdc6b51497edd31912cb1819914c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765245387876361124,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 7f6ea96060ca8daf2f4fa541fba3771c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc82dcd02980dfe5cfcad067f06da24ccad8715782004643b6379245ab335497,PodSandboxId:cf79b9732b052ad72057f9fe0e7124efccda99cd600e66a4d7107351a6144328,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765245387790404061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 617af1bb7b72d83eac8d928f752abda3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9acedacc-b0bb-49d1-8c57-6508017c2953 name=/runtime.v1.RuntimeService/ListContainers
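The Version, ImageFsInfo, and ListContainers entries above are CRI RPCs arriving on CRI-O's runtime socket (here, apparently `minikube logs` gathering its post-mortem data). The same RPCs can be issued by hand with crictl from inside the node; a sketch, assuming crictl is present on the node image and that the socket needs root:

# Issue the same CRI RPCs seen in the journal entries above.
out/minikube-linux-amd64 -p addons-712341 ssh "sudo crictl version"      # runtime.v1.RuntimeService/Version
out/minikube-linux-amd64 -p addons-712341 ssh "sudo crictl imagefsinfo"  # runtime.v1.ImageService/ImageFsInfo
out/minikube-linux-amd64 -p addons-712341 ssh "sudo crictl ps -a"        # runtime.v1.RuntimeService/ListContainers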
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.861780209Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=acdf9e16-99a6-4a13-bcc3-a9ca80fa3ccf name=/runtime.v1.RuntimeService/Version
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.861883365Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=acdf9e16-99a6-4a13-bcc3-a9ca80fa3ccf name=/runtime.v1.RuntimeService/Version
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.863343110Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79162913-9ec2-4d23-8f94-151473e80395 name=/runtime.v1.ImageService/ImageFsInfo
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.864681586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765245669864654053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545751,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79162913-9ec2-4d23-8f94-151473e80395 name=/runtime.v1.ImageService/ImageFsInfo
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.865716178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16f5baa3-d520-40fe-a770-554e981b3112 name=/runtime.v1.RuntimeService/ListContainers
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.865773811Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16f5baa3-d520-40fe-a770-554e981b3112 name=/runtime.v1.RuntimeService/ListContainers
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.866145718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec15d060ae394906d39ef35df621d3eaa17eff94affb7d575b4004b993bb8387,PodSandboxId:62f0b8ae019de502b93b08128bc55fcf2b19162ed6caf21dd6c320accbc9cbcf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765245526447302776,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 56885611-8b41-4e56-b6f9-8cc75bfdbfd9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f587c28a8ce3c46b82eee4271e328e96588b54af9dcbc51395bc49b1c3cf5cb5,PodSandboxId:3c0b1ee3ed1034dfec65a6b682d4dc347ff9aea35f69421105196a0cda41475b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765245491470685947,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de8fd268-6e5a-4d89-89ef-8d352023a017,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca93564ea7dd662f18e42eeadee30ffbc06cd7c45ccdbea985fb8f36a4429a3d,PodSandboxId:f69cb0b70c06ea6d570b69b535d56ac56b7960c5302d4bddeefb01d520709a8f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765245481871542524,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-swb6n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1b839a85-e21a-4700-bdd5-73a4eb455656,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4318789e13291a608e2e9a415455ab5f4461ae68099375bf94ff7c7e5d2d5375,PodSandboxId:fe31d2996503040215e0c01f0a810cbd2fe242d024000ad576cc84789df1ae40,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765245448039684284,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d4sv2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 358a6b20-7ecd-43a5-bcd7-0ed30014543e,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3386cbf3ac7e87460fb2a04e7019500054049023b78cc5c926010c9b389697b,PodSandboxId:fc666f12e07f041eca7c227af7f72d42386b4dc46a40c2a77fe7fc1310b500eb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765245446327400051,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7bf82,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 94315895-0cf8-4263-8d0c-d3aa9b6dbe2b,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e2be0f8d767c790242e8b7b87b5c2c63447f49568e123be16a57d2df1139f42,PodSandboxId:73ad4dba12805d0d45c3ab7da1a7c244f5e83888673efc139454028d68f86c10,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765245435332705098,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756114fc-819b-48c7-9b13-f0fb6eb36384,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260555cd1575816836ddb050ffe5036a4263d87790b0362a7a833bdf6d25fdb5,PodSandboxId:10e24757d42b2a67cfec36df263a739da7031be1f40a8e8efc64cd3aa7a56a19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&Ima
geSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765245414418711010,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5f0da7-b773-470f-999a-a04b68b1cfbc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30dcac27e864e8939ded9c048c72e6aaf02e7fb23ca367d6998c8a3451001061,PodSandboxId:e1e470ec0036f562e2d3ba4058327fe7dba3b9556bc0ad8737f9a479e574df4a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Ima
ge:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765245410123959153,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-v9zls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be0f5b68-1efc-4f03-b19d-adfa034a57b3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38631b1bcc4c2e7248750e6d1133052729f2e37827e330e72bf02c4a81d8f68b,PodSandboxId:0fcc9f2967f1dfe6617a59473ed7c4fc75c6c8bf8900d20d29a281cc7287610e,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765245401911110108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-shdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f44c72-0768-4808-a1c0-509d3e328c38,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6720b9b4382c48c64fdec86c2fd0596e617c82196ba5f4b5489e136a804fc6fb,PodSandboxId:c02b4d25a160be1076b454bcffb215cb8e5dcddd53d5702208a4f51964224f3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765245400953649001,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vk4qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b43011e-4293-431e-838d-88f45ea2837d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a3b82a29bc88ba34fdd0a63cfa749adabfbbce5ee66a7027143a11789da78ba,PodSandboxId:5cd8e6de89e438bec91b41e04acc882cf651e09c170a9d307c836acb3a5106fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765245387963030793,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce3ac49c9daa5dc52e59239b1562bf5a,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":102
57,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ecfa308eea4cdeebe3c9474876bba25ef96e20f8e8cf4305f0bf1a32112ee5b,PodSandboxId:d34cb6f659c30db478556b006b08926efdd4ac502cb7f85e396aa485f9802e5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765245387936682755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983b6a8f4c7cd5049430c8725659e085,},Annotations:map[string]string{io.kubernetes.container.hash:
e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:685da6ee8ce553eb479d57c5570e5ce09b45f9f091f643861572f0b00fa9f7c4,PodSandboxId:e610390c074e940470ed9c320800e40d3cfcdc6b51497edd31912cb1819914c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765245387876361124,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 7f6ea96060ca8daf2f4fa541fba3771c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc82dcd02980dfe5cfcad067f06da24ccad8715782004643b6379245ab335497,PodSandboxId:cf79b9732b052ad72057f9fe0e7124efccda99cd600e66a4d7107351a6144328,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765245387790404061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 617af1bb7b72d83eac8d928f752abda3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16f5baa3-d520-40fe-a770-554e981b3112 name=/runtime.v1.RuntimeService/ListContainers
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.867720201Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=504bfdf9-7d37-4107-b18c-33f075f50f3f name=/runtime.v1.RuntimeService/ListPodSandbox
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.868678055Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:893bfb8bf65bf7d1bcc7d23c10585d6867da8439ed5550dc87e0c249a63b8b91,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-lmwdx,Uid:c662e72a-cc05-4c42-9e4a-0643c57478d7,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245668909198277,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-lmwdx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c662e72a-cc05-4c42-9e4a-0643c57478d7,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-09T02:01:08.581156960Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:62f0b8ae019de502b93b08128bc55fcf2b19162ed6caf21dd6c320accbc9cbcf,Metadata:&PodSandboxMetadata{Name:nginx,Uid:56885611-8b41-4e56-b6f9-8cc75bfdbfd9,Namespace:default,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1765245520962164705,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 56885611-8b41-4e56-b6f9-8cc75bfdbfd9,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-09T01:58:40.569830689Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3c0b1ee3ed1034dfec65a6b682d4dc347ff9aea35f69421105196a0cda41475b,Metadata:&PodSandboxMetadata{Name:busybox,Uid:de8fd268-6e5a-4d89-89ef-8d352023a017,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245488805507086,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de8fd268-6e5a-4d89-89ef-8d352023a017,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-09T01:58:08.481491691Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f69cb0b70c06ea6d570b6
9b535d56ac56b7960c5302d4bddeefb01d520709a8f,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-85d4c799dd-swb6n,Uid:1b839a85-e21a-4700-bdd5-73a4eb455656,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245473962477828,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-swb6n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1b839a85-e21a-4700-bdd5-73a4eb455656,pod-template-hash: 85d4c799dd,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-09T01:56:49.727401401Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:10e24757d42b2a67cfec36df263a739da7031be1f40a8e8efc64cd3aa7a56a19,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7f5f0da7-b773-470f-999a-a04b68b1cfbc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,
CreatedAt:1765245409861266483,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5f0da7-b773-470f-999a-a04b68b1cfbc,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"D
irectory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-09T01:56:47.228337375Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:73ad4dba12805d0d45c3ab7da1a7c244f5e83888673efc139454028d68f86c10,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:756114fc-819b-48c7-9b13-f0fb6eb36384,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245409842477875,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756114fc-819b-48c7-9b13-f0fb6eb36384,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"container
s\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-12-09T01:56:46.978669710Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e1e470ec0036f562e2d3ba4058327fe7dba3b9556bc0ad8737f9a479e574df4a,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-v9zls,Uid:be0f5b68-1efc-4f03-b19d-adfa034a57b3,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1765245403890357077,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-v9zls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be0f5b68-1efc-4f03-b19d-adfa034a57b3,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-09T01:56:43.553163358Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0fcc9f2967f1dfe6617a59473ed7c4fc75c6c8bf8900d20d29a281cc7287610e,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-shdck,Uid:d0f44c72-0768-4808-a1c0-509d3e328c38,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245400699093153,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-shdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f44c72-0768-4808-a1c0-509d3e328c38,k8s-app: kube-dns,po
d-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-09T01:56:40.355793570Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c02b4d25a160be1076b454bcffb215cb8e5dcddd53d5702208a4f51964224f3c,Metadata:&PodSandboxMetadata{Name:kube-proxy-vk4qc,Uid:8b43011e-4293-431e-838d-88f45ea2837d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245400567993975,Labels:map[string]string{controller-revision-hash: 66d5f8d6f6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vk4qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b43011e-4293-431e-838d-88f45ea2837d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-09T01:56:40.240345782Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5cd8e6de89e438bec91b41e04acc882cf651e09c170a9d307c836acb3a5106fb,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-712341,Uid:ce3ac49c9daa5
dc52e59239b1562bf5a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245387623289941,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce3ac49c9daa5dc52e59239b1562bf5a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ce3ac49c9daa5dc52e59239b1562bf5a,kubernetes.io/config.seen: 2025-12-09T01:56:26.713360509Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d34cb6f659c30db478556b006b08926efdd4ac502cb7f85e396aa485f9802e5e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-712341,Uid:983b6a8f4c7cd5049430c8725659e085,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245387593526375,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-712341,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 983b6a8f4c7cd5049430c8725659e085,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 983b6a8f4c7cd5049430c8725659e085,kubernetes.io/config.seen: 2025-12-09T01:56:26.713361403Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e610390c074e940470ed9c320800e40d3cfcdc6b51497edd31912cb1819914c9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-712341,Uid:7f6ea96060ca8daf2f4fa541fba3771c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245387479539170,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f6ea96060ca8daf2f4fa541fba3771c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.107:8443,kubernetes.io/config.hash: 7f6ea96060ca8daf2f4fa541fba3771c,kubernetes.io/config.seen: 2025-12-09T01:56:26.7
13359498Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cf79b9732b052ad72057f9fe0e7124efccda99cd600e66a4d7107351a6144328,Metadata:&PodSandboxMetadata{Name:etcd-addons-712341,Uid:617af1bb7b72d83eac8d928f752abda3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765245387477163891,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 617af1bb7b72d83eac8d928f752abda3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.107:2379,kubernetes.io/config.hash: 617af1bb7b72d83eac8d928f752abda3,kubernetes.io/config.seen: 2025-12-09T01:56:26.713356729Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=504bfdf9-7d37-4107-b18c-33f075f50f3f name=/runtime.v1.RuntimeService/ListPodSandbox
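Similarly, the ListPodSandbox response above (the request filters on SANDBOX_READY) is what crictl renders as a pod table; a sketch under the same assumptions as before:

# Tabular view of the ListPodSandbox RPC above.
out/minikube-linux-amd64 -p addons-712341 ssh "sudo crictl pods"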
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.870769072Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d35db87-611e-42ab-bbba-a23b50dc9b73 name=/runtime.v1.RuntimeService/ListContainers
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.870827635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d35db87-611e-42ab-bbba-a23b50dc9b73 name=/runtime.v1.RuntimeService/ListContainers
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.871146584Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec15d060ae394906d39ef35df621d3eaa17eff94affb7d575b4004b993bb8387,PodSandboxId:62f0b8ae019de502b93b08128bc55fcf2b19162ed6caf21dd6c320accbc9cbcf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765245526447302776,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 56885611-8b41-4e56-b6f9-8cc75bfdbfd9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f587c28a8ce3c46b82eee4271e328e96588b54af9dcbc51395bc49b1c3cf5cb5,PodSandboxId:3c0b1ee3ed1034dfec65a6b682d4dc347ff9aea35f69421105196a0cda41475b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765245491470685947,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de8fd268-6e5a-4d89-89ef-8d352023a017,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca93564ea7dd662f18e42eeadee30ffbc06cd7c45ccdbea985fb8f36a4429a3d,PodSandboxId:f69cb0b70c06ea6d570b69b535d56ac56b7960c5302d4bddeefb01d520709a8f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765245481871542524,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-swb6n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1b839a85-e21a-4700-bdd5-73a4eb455656,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5e2be0f8d767c790242e8b7b87b5c2c63447f49568e123be16a57d2df1139f42,PodSandboxId:73ad4dba12805d0d45c3ab7da1a7c244f5e83888673efc139454028d68f86c10,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a
b53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765245435332705098,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756114fc-819b-48c7-9b13-f0fb6eb36384,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260555cd1575816836ddb050ffe5036a4263d87790b0362a7a833bdf6d25fdb5,PodSandboxId:10e24757d42b2a67cfec36df263a739da7031be1f40a8e8efc64cd3aa7a56a19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a30
2a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765245414418711010,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5f0da7-b773-470f-999a-a04b68b1cfbc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30dcac27e864e8939ded9c048c72e6aaf02e7fb23ca367d6998c8a3451001061,PodSandboxId:e1e470ec0036f562e2d3ba4058327fe7dba3b9556bc0ad8737f9a479e574df4a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166
c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765245410123959153,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-v9zls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be0f5b68-1efc-4f03-b19d-adfa034a57b3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38631b1bcc4c2e7248750e6d1133052729f2e37827e330e72bf02c4a81d8f68b,PodSandboxId:0fcc9f2967f1dfe6617a59473ed7c4fc75c6c8bf8900d20d29a281cc7287610e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e532450
23b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765245401911110108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-shdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f44c72-0768-4808-a1c0-509d3e328c38,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6720b9b4382c48c64fdec86c2fd0596e617c82196ba5f4b5489e136a804fc6fb,PodSandboxId:c02b4d25a160be1076b454bcffb215cb8e5dcddd53d5702208a4f51964224f3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765245400953649001,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vk4qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b43011e-4293-431e-838d-88f45ea2837d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:4a3b82a29bc88ba34fdd0a63cfa749adabfbbce5ee66a7027143a11789da78ba,PodSandboxId:5cd8e6de89e438bec91b41e04acc882cf651e09c170a9d307c836acb3a5106fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765245387963030793,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce3ac49c9daa5dc52e59239b1562bf5a,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ecfa308eea4cdeebe3c9474876bba25ef96e20f8e8cf4305f0bf1a32112ee5b,PodSandboxId:d34cb6f659c30db478556b006b08926efdd4ac502cb7f85e396aa485f9802e5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765245387936682755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983b6a8f4c7cd5049430c8725659e085,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:685da6ee8ce553eb479d57c5570e5ce09b45f9f091f643861572f0b00fa9f7c4,PodSandboxId:e610390c074e940470ed9c320800e40d3cfcdc6b51497edd31912cb1819914c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765245387876361124,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f6ea96060ca8daf2f4fa541fba3771c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc82dcd02980dfe5cfcad067f06da24ccad8715782004643b6379245ab335497,PodSandboxId:cf79b9732b052ad72057f9fe0e7124efccda99cd600e66a4d7107351a6144328,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765245387790404061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 617af1bb7b72d83eac8d928f752abda3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d35db87-611e-42ab-bbba-a23b50dc9b73 name=/runtime.v1.RuntimeService/ListContainers
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.903479781Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1106c887-11b0-4761-8735-7d1657e1c114 name=/runtime.v1.RuntimeService/Version
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.903631863Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1106c887-11b0-4761-8735-7d1657e1c114 name=/runtime.v1.RuntimeService/Version
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.905032057Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=640edb5d-c4c9-494f-9517-da85091afac9 name=/runtime.v1.ImageService/ImageFsInfo
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.906243464Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765245669906212981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545751,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=640edb5d-c4c9-494f-9517-da85091afac9 name=/runtime.v1.ImageService/ImageFsInfo
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.907280238Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd3430a8-8132-44d0-9c56-56b81df6303e name=/runtime.v1.RuntimeService/ListContainers
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.907400578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd3430a8-8132-44d0-9c56-56b81df6303e name=/runtime.v1.RuntimeService/ListContainers
Dec 09 02:01:09 addons-712341 crio[822]: time="2025-12-09 02:01:09.907788937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec15d060ae394906d39ef35df621d3eaa17eff94affb7d575b4004b993bb8387,PodSandboxId:62f0b8ae019de502b93b08128bc55fcf2b19162ed6caf21dd6c320accbc9cbcf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765245526447302776,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 56885611-8b41-4e56-b6f9-8cc75bfdbfd9,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f587c28a8ce3c46b82eee4271e328e96588b54af9dcbc51395bc49b1c3cf5cb5,PodSandboxId:3c0b1ee3ed1034dfec65a6b682d4dc347ff9aea35f69421105196a0cda41475b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765245491470685947,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de8fd268-6e5a-4d89-89ef-8d352023a017,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca93564ea7dd662f18e42eeadee30ffbc06cd7c45ccdbea985fb8f36a4429a3d,PodSandboxId:f69cb0b70c06ea6d570b69b535d56ac56b7960c5302d4bddeefb01d520709a8f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765245481871542524,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-swb6n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1b839a85-e21a-4700-bdd5-73a4eb455656,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4318789e13291a608e2e9a415455ab5f4461ae68099375bf94ff7c7e5d2d5375,PodSandboxId:fe31d2996503040215e0c01f0a810cbd2fe242d024000ad576cc84789df1ae40,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765245448039684284,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d4sv2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 358a6b20-7ecd-43a5-bcd7-0ed30014543e,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3386cbf3ac7e87460fb2a04e7019500054049023b78cc5c926010c9b389697b,PodSandboxId:fc666f12e07f041eca7c227af7f72d42386b4dc46a40c2a77fe7fc1310b500eb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765245446327400051,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7bf82,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 94315895-0cf8-4263-8d0c-d3aa9b6dbe2b,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e2be0f8d767c790242e8b7b87b5c2c63447f49568e123be16a57d2df1139f42,PodSandboxId:73ad4dba12805d0d45c3ab7da1a7c244f5e83888673efc139454028d68f86c10,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765245435332705098,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756114fc-819b-48c7-9b13-f0fb6eb36384,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260555cd1575816836ddb050ffe5036a4263d87790b0362a7a833bdf6d25fdb5,PodSandboxId:10e24757d42b2a67cfec36df263a739da7031be1f40a8e8efc64cd3aa7a56a19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&Ima
geSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765245414418711010,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5f0da7-b773-470f-999a-a04b68b1cfbc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30dcac27e864e8939ded9c048c72e6aaf02e7fb23ca367d6998c8a3451001061,PodSandboxId:e1e470ec0036f562e2d3ba4058327fe7dba3b9556bc0ad8737f9a479e574df4a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Ima
ge:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765245410123959153,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-v9zls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be0f5b68-1efc-4f03-b19d-adfa034a57b3,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38631b1bcc4c2e7248750e6d1133052729f2e37827e330e72bf02c4a81d8f68b,PodSandboxId:0fcc9f2967f1dfe6617a59473ed7c4fc75c6c8bf8900d20d29a281cc7287610e,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765245401911110108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-shdck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0f44c72-0768-4808-a1c0-509d3e328c38,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6720b9b4382c48c64fdec86c2fd0596e617c82196ba5f4b5489e136a804fc6fb,PodSandboxId:c02b4d25a160be1076b454bcffb215cb8e5dcddd53d5702208a4f51964224f3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765245400953649001,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vk4qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b43011e-4293-431e-838d-88f45ea2837d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a3b82a29bc88ba34fdd0a63cfa749adabfbbce5ee66a7027143a11789da78ba,PodSandboxId:5cd8e6de89e438bec91b41e04acc882cf651e09c170a9d307c836acb3a5106fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765245387963030793,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce3ac49c9daa5dc52e59239b1562bf5a,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":102
57,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ecfa308eea4cdeebe3c9474876bba25ef96e20f8e8cf4305f0bf1a32112ee5b,PodSandboxId:d34cb6f659c30db478556b006b08926efdd4ac502cb7f85e396aa485f9802e5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765245387936682755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 983b6a8f4c7cd5049430c8725659e085,},Annotations:map[string]string{io.kubernetes.container.hash:
e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:685da6ee8ce553eb479d57c5570e5ce09b45f9f091f643861572f0b00fa9f7c4,PodSandboxId:e610390c074e940470ed9c320800e40d3cfcdc6b51497edd31912cb1819914c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765245387876361124,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 7f6ea96060ca8daf2f4fa541fba3771c,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc82dcd02980dfe5cfcad067f06da24ccad8715782004643b6379245ab335497,PodSandboxId:cf79b9732b052ad72057f9fe0e7124efccda99cd600e66a4d7107351a6144328,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765245387790404061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-addons-712341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 617af1bb7b72d83eac8d928f752abda3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd3430a8-8132-44d0-9c56-56b81df6303e name=/runtime.v1.RuntimeService/ListContainers
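Note: the two ListContainersResponse dumps above are not accidental duplicates. The first request filtered on State:CONTAINER_RUNNING, while the second passed State:nil, so the second additionally returns the two Exited admission-webhook containers (patch and create). The same two views can be reproduced against the CRI socket from inside the VM with crictl; a minimal sketch, assuming the standard crictl flags:

  # minikube -p addons-712341 ssh, then:
  sudo crictl ps        # running containers only (matches the first, filtered response)
  sudo crictl ps -a     # all containers, including Exited ones (matches the second response)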
==> container status <==
CONTAINER   IMAGE   CREATED   STATE   NAME   ATTEMPT   POD ID   POD   NAMESPACE
ec15d060ae394   public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9   2 minutes ago   Running   nginx   0   62f0b8ae019de   nginx   default
f587c28a8ce3c   gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 minutes ago   Running   busybox   0   3c0b1ee3ed103   busybox   default
ca93564ea7dd6   registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad   3 minutes ago   Running   controller   0   f69cb0b70c06e   ingress-nginx-controller-85d4c799dd-swb6n   ingress-nginx
4318789e13291   registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago   Exited   patch   0   fe31d29965030   ingress-nginx-admission-patch-d4sv2   ingress-nginx
f3386cbf3ac7e   registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago   Exited   create   0   fc666f12e07f0   ingress-nginx-admission-create-7bf82   ingress-nginx
5e2be0f8d767c   docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7   3 minutes ago   Running   minikube-ingress-dns   0   73ad4dba12805   kube-ingress-dns-minikube   kube-system
260555cd15758   6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 minutes ago   Running   storage-provisioner   0   10e24757d42b2   storage-provisioner   kube-system
30dcac27e864e   docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f   4 minutes ago   Running   amd-gpu-device-plugin   0   e1e470ec0036f   amd-gpu-device-plugin-v9zls   kube-system
38631b1bcc4c2   52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   4 minutes ago   Running   coredns   0   0fcc9f2967f1d   coredns-66bc5c9577-shdck   kube-system
6720b9b4382c4   8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   4 minutes ago   Running   kube-proxy   0   c02b4d25a160b   kube-proxy-vk4qc   kube-system
4a3b82a29bc88   01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   4 minutes ago   Running   kube-controller-manager   0   5cd8e6de89e43   kube-controller-manager-addons-712341   kube-system
7ecfa308eea4c   88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   4 minutes ago   Running   kube-scheduler   0   d34cb6f659c30   kube-scheduler-addons-712341   kube-system
685da6ee8ce55   a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   4 minutes ago   Running   kube-apiserver   0   e610390c074e9   kube-apiserver-addons-712341   kube-system
cc82dcd02980d   a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   4 minutes ago   Running   etcd   0   cf79b9732b052   etcd-addons-712341   kube-system
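Reading the table: the ingress-nginx controller has been Running for ~3 minutes, and the two Exited kube-webhook-certgen containers (create and patch, restart count 0) are one-shot jobs that provision the admission-webhook certificate, so their state is expected. Nothing here points to a crashed container as the cause of the failed request (curl exits 28 on an operation timeout). A quick re-check from the host, using only standard kubectl; the deployment name is inferred from the pod name above:

  kubectl --context addons-712341 -n ingress-nginx get pods -o wide
  kubectl --context addons-712341 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50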
==> coredns [38631b1bcc4c2e7248750e6d1133052729f2e37827e330e72bf02c4a81d8f68b] <==
[INFO] 10.244.0.9:42296 - 55609 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000094114s
[INFO] 10.244.0.9:42296 - 21902 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000081263s
[INFO] 10.244.0.9:42296 - 64647 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000069065s
[INFO] 10.244.0.9:42296 - 17032 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000096282s
[INFO] 10.244.0.9:42296 - 59813 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000065799s
[INFO] 10.244.0.9:42296 - 11550 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000116661s
[INFO] 10.244.0.9:42296 - 2942 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000241862s
[INFO] 10.244.0.9:36256 - 62408 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000487727s
[INFO] 10.244.0.9:36256 - 62760 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000114583s
[INFO] 10.244.0.9:55003 - 12479 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00013171s
[INFO] 10.244.0.9:55003 - 12780 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000993857s
[INFO] 10.244.0.9:44776 - 55722 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000161759s
[INFO] 10.244.0.9:44776 - 55464 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000180221s
[INFO] 10.244.0.9:58912 - 22746 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00043327s
[INFO] 10.244.0.9:58912 - 22487 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000652323s
[INFO] 10.244.0.23:46020 - 3548 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000286907s
[INFO] 10.244.0.23:45275 - 59525 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00085293s
[INFO] 10.244.0.23:53151 - 9476 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000170215s
[INFO] 10.244.0.23:37553 - 36707 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127048s
[INFO] 10.244.0.23:60979 - 33304 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129896s
[INFO] 10.244.0.23:41952 - 29892 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095083s
[INFO] 10.244.0.23:38807 - 23059 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.002495032s
[INFO] 10.244.0.23:55574 - 25677 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005315665s
[INFO] 10.244.0.27:51199 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000715236s
[INFO] 10.244.0.27:59123 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000210023s
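The NXDOMAIN runs above are normal, not a DNS failure: with the cluster default of ndots:5, a name such as registry.kube-system.svc.cluster.local is first tried against every entry of the pod's search path (hence the extra .kube-system.svc.cluster.local, .svc.cluster.local, and .cluster.local suffixed queries) before the literal name resolves with NOERROR. A typical resolv.conf for a kube-system pod that produces exactly this pattern is sketched below; the nameserver IP is the kubeadm default DNS service ClusterIP and is an assumption here:

  # cat /etc/resolv.conf (inside a kube-system pod)
  nameserver 10.96.0.10
  search kube-system.svc.cluster.local svc.cluster.local cluster.local
  options ndots:5

In-cluster DNS resolution was therefore healthy while the ingress test ran.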
==> describe nodes <==
Name: addons-712341
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-712341
kubernetes.io/os=linux
minikube.k8s.io/commit=604647ccc1f2cd4d60ec88f36255b328e04e507d
minikube.k8s.io/name=addons-712341
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_09T01_56_34_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-712341
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 09 Dec 2025 01:56:30 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-712341
AcquireTime: <unset>
RenewTime: Tue, 09 Dec 2025 02:01:09 +0000
Conditions:
Type   Status   LastHeartbeatTime   LastTransitionTime   Reason   Message
----   ------   -----------------   ------------------   ------   -------
MemoryPressure   False   Tue, 09 Dec 2025 01:59:06 +0000   Tue, 09 Dec 2025 01:56:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure   False   Tue, 09 Dec 2025 01:59:06 +0000   Tue, 09 Dec 2025 01:56:28 +0000   KubeletHasNoDiskPressure   kubelet has no disk pressure
PIDPressure   False   Tue, 09 Dec 2025 01:59:06 +0000   Tue, 09 Dec 2025 01:56:28 +0000   KubeletHasSufficientPID   kubelet has sufficient PID available
Ready   True   Tue, 09 Dec 2025 01:59:06 +0000   Tue, 09 Dec 2025 01:56:35 +0000   KubeletReady   kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.107
Hostname: addons-712341
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
System Info:
Machine ID: 870ec28c5b8846bcb90887091429a736
System UUID: 870ec28c-5b88-46bc-b908-87091429a736
Boot ID: d4d81322-16b9-4840-86b9-308fe92e01c6
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.2
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace   Name   CPU Requests   CPU Limits   Memory Requests   Memory Limits   Age
---------   ----   ------------   ----------   ---------------   -------------   ---
default   busybox   0 (0%)   0 (0%)   0 (0%)   0 (0%)   3m2s
default   hello-world-app-5d498dc89-lmwdx   0 (0%)   0 (0%)   0 (0%)   0 (0%)   2s
default   nginx   0 (0%)   0 (0%)   0 (0%)   0 (0%)   2m30s
ingress-nginx   ingress-nginx-controller-85d4c799dd-swb6n   100m (5%)   0 (0%)   90Mi (2%)   0 (0%)   4m21s
kube-system   amd-gpu-device-plugin-v9zls   0 (0%)   0 (0%)   0 (0%)   0 (0%)   4m27s
kube-system   coredns-66bc5c9577-shdck   100m (5%)   0 (0%)   70Mi (1%)   170Mi (4%)   4m30s
kube-system   etcd-addons-712341   100m (5%)   0 (0%)   100Mi (2%)   0 (0%)   4m37s
kube-system   kube-apiserver-addons-712341   250m (12%)   0 (0%)   0 (0%)   0 (0%)   4m37s
kube-system   kube-controller-manager-addons-712341   200m (10%)   0 (0%)   0 (0%)   0 (0%)   4m38s
kube-system   kube-ingress-dns-minikube   0 (0%)   0 (0%)   0 (0%)   0 (0%)   4m24s
kube-system   kube-proxy-vk4qc   0 (0%)   0 (0%)   0 (0%)   0 (0%)   4m30s
kube-system   kube-scheduler-addons-712341   100m (5%)   0 (0%)   0 (0%)   0 (0%)   4m39s
kube-system   storage-provisioner   0 (0%)   0 (0%)   0 (0%)   0 (0%)   4m23s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource   Requests   Limits
--------   --------   ------
cpu   850m (42%)   0 (0%)
memory   260Mi (6%)   170Mi (4%)
ephemeral-storage   0 (0%)   0 (0%)
hugepages-2Mi   0 (0%)   0 (0%)
Events:
Type   Reason   Age   From   Message
----   ------   ----   ----   -------
Normal   Starting   4m28s   kube-proxy
Normal   Starting   4m44s   kubelet   Starting kubelet.
Normal   NodeHasSufficientMemory   4m44s (x8 over 4m44s)   kubelet   Node addons-712341 status is now: NodeHasSufficientMemory
Normal   NodeHasNoDiskPressure   4m44s (x8 over 4m44s)   kubelet   Node addons-712341 status is now: NodeHasNoDiskPressure
Normal   NodeHasSufficientPID   4m44s (x7 over 4m44s)   kubelet   Node addons-712341 status is now: NodeHasSufficientPID
Normal   NodeAllocatableEnforced   4m44s   kubelet   Updated Node Allocatable limit across pods
Normal   Starting   4m37s   kubelet   Starting kubelet.
Normal   NodeAllocatableEnforced   4m37s   kubelet   Updated Node Allocatable limit across pods
Normal   NodeHasSufficientMemory   4m36s   kubelet   Node addons-712341 status is now: NodeHasSufficientMemory
Normal   NodeHasNoDiskPressure   4m36s   kubelet   Node addons-712341 status is now: NodeHasNoDiskPressure
Normal   NodeHasSufficientPID   4m36s   kubelet   Node addons-712341 status is now: NodeHasSufficientPID
Normal   NodeReady   4m35s   kubelet   Node addons-712341 status is now: NodeReady
Normal   RegisteredNode   4m32s   node-controller   Node addons-712341 event: Registered Node addons-712341 in Controller
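Taken together, the node looks healthy at failure time: Ready since 01:56:35, no taints, no memory/disk/PID pressure, and only 850m of 2 CPUs plus 260Mi of ~3.8Gi memory requested, so resource starvation is an unlikely cause of the curl timeout. The duplicated Starting/NodeHasSufficient* events reflect the usual kubelet restart during kubeadm bootstrap. This view can be regenerated later with standard kubectl:

  kubectl --context addons-712341 describe node addons-712341
  kubectl --context addons-712341 get node addons-712341 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'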
==> dmesg <==
[ +0.036264] kauditd_printk_skb: 230 callbacks suppressed
[ +0.000044] kauditd_printk_skb: 456 callbacks suppressed
[Dec 9 01:57] kauditd_printk_skb: 40 callbacks suppressed
[ +5.814639] kauditd_printk_skb: 32 callbacks suppressed
[ +7.450317] kauditd_printk_skb: 26 callbacks suppressed
[ +5.017935] kauditd_printk_skb: 122 callbacks suppressed
[ +3.043245] kauditd_printk_skb: 75 callbacks suppressed
[ +5.222488] kauditd_printk_skb: 56 callbacks suppressed
[ +4.184648] kauditd_printk_skb: 126 callbacks suppressed
[ +0.000031] kauditd_printk_skb: 20 callbacks suppressed
[ +0.000051] kauditd_printk_skb: 29 callbacks suppressed
[Dec 9 01:58] kauditd_printk_skb: 53 callbacks suppressed
[ +2.685665] kauditd_printk_skb: 47 callbacks suppressed
[ +9.494229] kauditd_printk_skb: 17 callbacks suppressed
[ +5.990391] kauditd_printk_skb: 22 callbacks suppressed
[ +4.973091] kauditd_printk_skb: 38 callbacks suppressed
[ +0.399180] kauditd_printk_skb: 174 callbacks suppressed
[ +0.000029] kauditd_printk_skb: 197 callbacks suppressed
[ +3.500959] kauditd_printk_skb: 106 callbacks suppressed
[ +0.000044] kauditd_printk_skb: 35 callbacks suppressed
[Dec 9 01:59] kauditd_printk_skb: 65 callbacks suppressed
[ +10.678302] kauditd_printk_skb: 18 callbacks suppressed
[ +0.000308] kauditd_printk_skb: 10 callbacks suppressed
[ +7.791965] kauditd_printk_skb: 41 callbacks suppressed
[Dec 9 02:01] kauditd_printk_skb: 127 callbacks suppressed
==> etcd [cc82dcd02980dfe5cfcad067f06da24ccad8715782004643b6379245ab335497] <==
{"level":"warn","ts":"2025-12-09T01:57:11.885110Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T01:57:11.523196Z","time spent":"361.859922ms","remote":"127.0.0.1:34258","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:962 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
{"level":"warn","ts":"2025-12-09T01:57:11.885162Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-09T01:57:11.568969Z","time spent":"316.130105ms","remote":"127.0.0.1:34306","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
{"level":"warn","ts":"2025-12-09T01:57:11.885415Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"272.22843ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2025-12-09T01:57:11.885670Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"220.777985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-09T01:57:11.885696Z","caller":"traceutil/trace.go:172","msg":"trace[686371842] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:971; }","duration":"220.805354ms","start":"2025-12-09T01:57:11.664885Z","end":"2025-12-09T01:57:11.885691Z","steps":["trace[686371842] 'agreement among raft nodes before linearized reading' (duration: 220.763473ms)"],"step_count":1}
{"level":"info","ts":"2025-12-09T01:57:11.885741Z","caller":"traceutil/trace.go:172","msg":"trace[1386898485] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:971; }","duration":"272.550807ms","start":"2025-12-09T01:57:11.613179Z","end":"2025-12-09T01:57:11.885730Z","steps":["trace[1386898485] 'agreement among raft nodes before linearized reading' (duration: 272.205022ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-09T01:57:11.885861Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"272.675111ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-09T01:57:11.885876Z","caller":"traceutil/trace.go:172","msg":"trace[1439734591] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:971; }","duration":"272.692165ms","start":"2025-12-09T01:57:11.613179Z","end":"2025-12-09T01:57:11.885872Z","steps":["trace[1439734591] 'agreement among raft nodes before linearized reading' (duration: 272.661957ms)"],"step_count":1}
{"level":"info","ts":"2025-12-09T01:57:14.156451Z","caller":"traceutil/trace.go:172","msg":"trace[1588852585] transaction","detail":"{read_only:false; response_revision:974; number_of_response:1; }","duration":"256.622976ms","start":"2025-12-09T01:57:13.899816Z","end":"2025-12-09T01:57:14.156439Z","steps":["trace[1588852585] 'process raft request' (duration: 256.266228ms)"],"step_count":1}
{"level":"info","ts":"2025-12-09T01:57:21.910872Z","caller":"traceutil/trace.go:172","msg":"trace[1193035101] transaction","detail":"{read_only:false; response_revision:996; number_of_response:1; }","duration":"151.454529ms","start":"2025-12-09T01:57:21.759405Z","end":"2025-12-09T01:57:21.910860Z","steps":["trace[1193035101] 'process raft request' (duration: 151.351921ms)"],"step_count":1}
{"level":"info","ts":"2025-12-09T01:57:22.381173Z","caller":"traceutil/trace.go:172","msg":"trace[1501272587] transaction","detail":"{read_only:false; response_revision:997; number_of_response:1; }","duration":"138.426806ms","start":"2025-12-09T01:57:22.242732Z","end":"2025-12-09T01:57:22.381159Z","steps":["trace[1501272587] 'process raft request' (duration: 138.314209ms)"],"step_count":1}
{"level":"info","ts":"2025-12-09T01:57:24.618967Z","caller":"traceutil/trace.go:172","msg":"trace[1682335110] transaction","detail":"{read_only:false; response_revision:1005; number_of_response:1; }","duration":"219.870394ms","start":"2025-12-09T01:57:24.399085Z","end":"2025-12-09T01:57:24.618955Z","steps":["trace[1682335110] 'process raft request' (duration: 219.380248ms)"],"step_count":1}
{"level":"info","ts":"2025-12-09T01:57:30.789092Z","caller":"traceutil/trace.go:172","msg":"trace[979625353] transaction","detail":"{read_only:false; response_revision:1044; number_of_response:1; }","duration":"120.317673ms","start":"2025-12-09T01:57:30.668762Z","end":"2025-12-09T01:57:30.789079Z","steps":["trace[979625353] 'process raft request' (duration: 120.199337ms)"],"step_count":1}
{"level":"info","ts":"2025-12-09T01:57:32.122185Z","caller":"traceutil/trace.go:172","msg":"trace[1451172396] linearizableReadLoop","detail":"{readStateIndex:1069; appliedIndex:1069; }","duration":"113.257584ms","start":"2025-12-09T01:57:32.008911Z","end":"2025-12-09T01:57:32.122169Z","steps":["trace[1451172396] 'read index received' (duration: 113.253359ms)","trace[1451172396] 'applied index is now lower than readState.Index' (duration: 3.586µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-09T01:57:32.122344Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.416456ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-09T01:57:32.122366Z","caller":"traceutil/trace.go:172","msg":"trace[585012504] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices; range_end:; response_count:0; response_revision:1045; }","duration":"113.483429ms","start":"2025-12-09T01:57:32.008878Z","end":"2025-12-09T01:57:32.122361Z","steps":["trace[585012504] 'agreement among raft nodes before linearized reading' (duration: 113.385292ms)"],"step_count":1}
{"level":"info","ts":"2025-12-09T01:57:32.124355Z","caller":"traceutil/trace.go:172","msg":"trace[148790987] transaction","detail":"{read_only:false; response_revision:1046; number_of_response:1; }","duration":"236.432391ms","start":"2025-12-09T01:57:31.887910Z","end":"2025-12-09T01:57:32.124342Z","steps":["trace[148790987] 'process raft request' (duration: 234.647039ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-09T01:57:59.017735Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"173.03494ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2025-12-09T01:57:59.018072Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.840959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-09T01:57:59.020012Z","caller":"traceutil/trace.go:172","msg":"trace[1329056508] range","detail":"{range_begin:/registry/flowschemas; range_end:; response_count:0; response_revision:1169; }","duration":"156.575633ms","start":"2025-12-09T01:57:58.863215Z","end":"2025-12-09T01:57:59.019790Z","steps":["trace[1329056508] 'range keys from in-memory index tree' (duration: 154.784877ms)"],"step_count":1}
{"level":"info","ts":"2025-12-09T01:57:59.019528Z","caller":"traceutil/trace.go:172","msg":"trace[1915387367] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1169; }","duration":"174.767028ms","start":"2025-12-09T01:57:58.844672Z","end":"2025-12-09T01:57:59.019439Z","steps":["trace[1915387367] 'range keys from in-memory index tree' (duration: 173.023488ms)"],"step_count":1}
{"level":"info","ts":"2025-12-09T01:58:07.253874Z","caller":"traceutil/trace.go:172","msg":"trace[1884630933] transaction","detail":"{read_only:false; response_revision:1212; number_of_response:1; }","duration":"144.467179ms","start":"2025-12-09T01:58:07.109393Z","end":"2025-12-09T01:58:07.253860Z","steps":["trace[1884630933] 'process raft request' (duration: 143.645991ms)"],"step_count":1}
{"level":"info","ts":"2025-12-09T01:58:31.511333Z","caller":"traceutil/trace.go:172","msg":"trace[1635541547] transaction","detail":"{read_only:false; response_revision:1356; number_of_response:1; }","duration":"104.116387ms","start":"2025-12-09T01:58:31.407195Z","end":"2025-12-09T01:58:31.511311Z","steps":["trace[1635541547] 'process raft request' (duration: 103.851396ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-09T01:58:45.419703Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"212.098626ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
{"level":"info","ts":"2025-12-09T01:58:45.419806Z","caller":"traceutil/trace.go:172","msg":"trace[235655878] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1543; }","duration":"212.222586ms","start":"2025-12-09T01:58:45.207572Z","end":"2025-12-09T01:58:45.419794Z","steps":["trace[235655878] 'range keys from in-memory index tree' (duration: 211.879958ms)"],"step_count":1}
==> kernel <==
02:01:10 up 5 min, 0 users, load average: 0.50, 1.40, 0.75
Linux addons-712341 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec 8 03:04:10 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [685da6ee8ce553eb479d57c5570e5ce09b45f9f091f643861572f0b00fa9f7c4] <==
E1209 01:57:09.622530 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.183.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.183.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.183.186:443: connect: connection refused" logger="UnhandledError"
E1209 01:57:09.630707 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.183.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.183.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.183.186:443: connect: connection refused" logger="UnhandledError"
I1209 01:57:09.768552 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1209 01:58:17.694300 1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:51060: use of closed network connection
E1209 01:58:17.938866 1 conn.go:339] Error on socket receive: read tcp 192.168.39.107:8443->192.168.39.1:51092: use of closed network connection
I1209 01:58:27.234163 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.147.171"}
I1209 01:58:40.397708 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1209 01:58:40.617188 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.181.85"}
E1209 01:58:59.797474 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I1209 01:59:02.965310 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1209 01:59:10.643863 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1209 01:59:32.281429 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1209 01:59:32.282261 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1209 01:59:32.328400 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1209 01:59:32.328462 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1209 01:59:32.334356 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1209 01:59:32.335741 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1209 01:59:32.356051 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1209 01:59:32.356108 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1209 01:59:32.379122 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1209 01:59:32.379176 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1209 01:59:33.335029 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1209 01:59:33.379202 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1209 01:59:33.399656 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I1209 02:01:08.693028 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.245.66"}
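The apiserver lines above show the aggregated metrics.k8s.io/v1beta1 API flapping while metrics-server started (dial refused at 01:57:09, apparently settled by the 01:59:10 "Nothing (removed from the queue)" entry), and the snapshot.storage.k8s.io group-versions being registered and then their watchers terminated as the CSI addons were torn down. One way to confirm an aggregated API has settled is a plain discovery call; the sketch below uses client-go for that check (the kubeconfig path is an assumption for a local setup, not something the test harness uses):

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default local kubeconfig (~/.kube/config); adjust as needed.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// While the backing service is unreachable (as at 01:57:09 above),
		// this discovery call fails; once aggregation settles, it succeeds.
		rl, err := cs.Discovery().ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
		if err != nil {
			fmt.Println("metrics API not available:", err)
			return
		}
		for _, r := range rl.APIResources {
			fmt.Println(r.Name)
		}
	}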
==> kube-controller-manager [4a3b82a29bc88ba34fdd0a63cfa749adabfbbce5ee66a7027143a11789da78ba] <==
I1209 01:59:39.318114 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1209 01:59:40.794263 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 01:59:40.795727 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 01:59:41.014658 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 01:59:41.015830 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 01:59:43.113962 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 01:59:43.115036 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 01:59:51.375832 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 01:59:51.377508 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 01:59:51.727942 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 01:59:51.729082 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 01:59:53.498247 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 01:59:53.499268 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 02:00:09.092535 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 02:00:09.093945 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 02:00:10.738639 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 02:00:10.739759 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 02:00:13.109683 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 02:00:13.111686 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 02:00:38.862688 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 02:00:38.863766 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 02:00:44.238065 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 02:00:44.239191 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1209 02:00:59.171108 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1209 02:00:59.172840 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [6720b9b4382c48c64fdec86c2fd0596e617c82196ba5f4b5489e136a804fc6fb] <==
I1209 01:56:41.667158 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1209 01:56:41.770763 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1209 01:56:41.770817 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.107"]
E1209 01:56:41.770905 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1209 01:56:41.913406 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
	error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Perhaps ip6tables or your kernel needs to be upgraded.
 >
I1209 01:56:41.913504 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1209 01:56:41.913533 1 server_linux.go:132] "Using iptables Proxier"
I1209 01:56:41.961751 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1209 01:56:41.962992 1 server.go:527] "Version info" version="v1.34.2"
I1209 01:56:41.963007 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1209 01:56:41.979972 1 config.go:200] "Starting service config controller"
I1209 01:56:41.980086 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1209 01:56:41.980114 1 config.go:106] "Starting endpoint slice config controller"
I1209 01:56:41.980117 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1209 01:56:41.980127 1 config.go:403] "Starting serviceCIDR config controller"
I1209 01:56:41.980131 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1209 01:56:41.986531 1 config.go:309] "Starting node config controller"
I1209 01:56:41.993120 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1209 01:56:41.993139 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1209 01:56:42.080271 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1209 01:56:42.080347 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1209 01:56:42.081078 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
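The only warning in the kube-proxy startup above is the unset nodePortAddresses, which just means NodePort services accept connections on every local IP. If one wanted the behaviour the message suggests, the component-config field is sketched below; the Go type and field name are my reading of the k8s.io/kube-proxy v1alpha1 API, keyed off the `--nodeport-addresses primary` hint in the warning, and should be checked against the config reference for v1.34:

	package main

	import (
		"fmt"

		kpconfig "k8s.io/kube-proxy/config/v1alpha1"
	)

	func main() {
		// Restrict NodePort listeners to the node's primary IPs instead of
		// all local IPs ("primary" is the keyword from the warning text; the
		// field corresponds to the --nodeport-addresses flag).
		cfg := kpconfig.KubeProxyConfiguration{
			NodePortAddresses: []string{"primary"},
		}
		fmt.Println(cfg.NodePortAddresses)
	}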
==> kube-scheduler [7ecfa308eea4cdeebe3c9474876bba25ef96e20f8e8cf4305f0bf1a32112ee5b] <==
E1209 01:56:31.341216 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1209 01:56:31.341271 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1209 01:56:31.341332 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1209 01:56:31.341383 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1209 01:56:31.341498 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1209 01:56:31.341631 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1209 01:56:31.341692 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1209 01:56:31.345079 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1209 01:56:31.345218 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1209 01:56:31.345517 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1209 01:56:31.345628 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1209 01:56:31.345673 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1209 01:56:31.345671 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1209 01:56:31.345741 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1209 01:56:32.179215 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1209 01:56:32.220888 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1209 01:56:32.248416 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1209 01:56:32.248500 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1209 01:56:32.256655 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1209 01:56:32.259745 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1209 01:56:32.290689 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1209 01:56:32.324893 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1209 01:56:32.331508 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1209 01:56:32.411075 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
I1209 01:56:35.027431 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Dec 09 01:59:36 addons-712341 kubelet[1520]: I1209 01:59:36.015643 1520 scope.go:117] "RemoveContainer" containerID="bff4550f52da7913f39c959e5ee03c17b88732e9fee0da2208440c0a9a1f2b70"
Dec 09 01:59:36 addons-712341 kubelet[1520]: I1209 01:59:36.139811 1520 scope.go:117] "RemoveContainer" containerID="c7ae4df8cc2814561ba847252e260884dd3d7d2e529f06fd0777f671bfccfc58"
Dec 09 01:59:36 addons-712341 kubelet[1520]: I1209 01:59:36.261715 1520 scope.go:117] "RemoveContainer" containerID="4268b5239a5866c7c04b11e1a8e21f9cd0c8d1dbfd623f94d78d7e16e5646214"
Dec 09 01:59:36 addons-712341 kubelet[1520]: I1209 01:59:36.380910 1520 scope.go:117] "RemoveContainer" containerID="65e0709f2bd968c6ef390b89c68a0e5cc4e56c9c0b6a1c830a41130c46cdbfe1"
Dec 09 01:59:44 addons-712341 kubelet[1520]: E1209 01:59:44.181694 1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765245584177257750 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
Dec 09 01:59:44 addons-712341 kubelet[1520]: E1209 01:59:44.181719 1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765245584177257750 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
Dec 09 01:59:54 addons-712341 kubelet[1520]: E1209 01:59:54.184196 1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765245594183723092 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
Dec 09 01:59:54 addons-712341 kubelet[1520]: E1209 01:59:54.184228 1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765245594183723092 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
Dec 09 02:00:04 addons-712341 kubelet[1520]: E1209 02:00:04.187097 1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765245604186764194 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
Dec 09 02:00:04 addons-712341 kubelet[1520]: E1209 02:00:04.187127 1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765245604186764194 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
Dec 09 02:00:14 addons-712341 kubelet[1520]: E1209 02:00:14.189307 1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765245614188984154 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
Dec 09 02:00:14 addons-712341 kubelet[1520]: E1209 02:00:14.189328 1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765245614188984154 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
Dec 09 02:00:24 addons-712341 kubelet[1520]: E1209 02:00:24.192705 1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765245624192302673 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
Dec 09 02:00:24 addons-712341 kubelet[1520]: E1209 02:00:24.192729 1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765245624192302673 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
Dec 09 02:00:27 addons-712341 kubelet[1520]: I1209 02:00:27.867068 1520 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-v9zls" secret="" err="secret \"gcp-auth\" not found"
Dec 09 02:00:34 addons-712341 kubelet[1520]: E1209 02:00:34.195757 1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765245634195274055 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
Dec 09 02:00:34 addons-712341 kubelet[1520]: E1209 02:00:34.195816 1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765245634195274055 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
Dec 09 02:00:44 addons-712341 kubelet[1520]: E1209 02:00:44.198972 1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765245644198491809 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
Dec 09 02:00:44 addons-712341 kubelet[1520]: E1209 02:00:44.199006 1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765245644198491809 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
Dec 09 02:00:50 addons-712341 kubelet[1520]: I1209 02:00:50.865905 1520 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Dec 09 02:00:54 addons-712341 kubelet[1520]: E1209 02:00:54.201975 1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765245654201421824 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
Dec 09 02:00:54 addons-712341 kubelet[1520]: E1209 02:00:54.202018 1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765245654201421824 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
Dec 09 02:01:04 addons-712341 kubelet[1520]: E1209 02:01:04.205101 1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765245664204500677 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
Dec 09 02:01:04 addons-712341 kubelet[1520]: E1209 02:01:04.205308 1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765245664204500677 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545751} inodes_used:{value:187}}"
Dec 09 02:01:08 addons-712341 kubelet[1520]: I1209 02:01:08.677649 1520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x28j\" (UniqueName: \"kubernetes.io/projected/c662e72a-cc05-4c42-9e4a-0643c57478d7-kube-api-access-9x28j\") pod \"hello-world-app-5d498dc89-lmwdx\" (UID: \"c662e72a-cc05-4c42-9e4a-0643c57478d7\") " pod="default/hello-world-app-5d498dc89-lmwdx"
==> storage-provisioner [260555cd1575816836ddb050ffe5036a4263d87790b0362a7a833bdf6d25fdb5] <==
W1209 02:00:44.531410 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:00:46.535498 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:00:46.541649 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:00:48.546474 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:00:48.554210 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:00:50.558572 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:00:50.564751 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:00:52.568212 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:00:52.575686 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:00:54.580056 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:00:54.586887 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:00:56.589898 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:00:56.596446 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:00:58.600113 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:00:58.606136 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:01:00.610528 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:01:00.618113 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:01:02.622638 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:01:02.629642 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:01:04.632884 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:01:04.639076 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:01:06.646721 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:01:06.655279 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:01:08.675743 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1209 02:01:08.719481 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
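The storage-provisioner warnings above all stem from watching v1 Endpoints, deprecated since v1.33 in favour of discovery.k8s.io/v1 EndpointSlice. As a sketch of the replacement the warning points at, the client-go call below lists EndpointSlices instead (kubeconfig path and namespace are illustrative assumptions):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// discovery.k8s.io/v1 EndpointSlices carry the same endpoint data
		// as the deprecated v1 Endpoints objects being watched above.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}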
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-712341 -n addons-712341
helpers_test.go:269: (dbg) Run: kubectl --context addons-712341 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-lmwdx ingress-nginx-admission-create-7bf82 ingress-nginx-admission-patch-d4sv2
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context addons-712341 describe pod hello-world-app-5d498dc89-lmwdx ingress-nginx-admission-create-7bf82 ingress-nginx-admission-patch-d4sv2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-712341 describe pod hello-world-app-5d498dc89-lmwdx ingress-nginx-admission-create-7bf82 ingress-nginx-admission-patch-d4sv2: exit status 1 (80.538643ms)
-- stdout --
Name:             hello-world-app-5d498dc89-lmwdx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-712341/192.168.39.107
Start Time:       Tue, 09 Dec 2025 02:01:08 +0000
Labels:           app=hello-world-app
                  pod-template-hash=5d498dc89
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/hello-world-app-5d498dc89
Containers:
  hello-world-app:
    Container ID:
    Image:          docker.io/kicbase/echo-server:1.0
    Image ID:
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9x28j (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-9x28j:
    Type:                     Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:   3607
    ConfigMapName:            kube-root-ca.crt
    Optional:                 false
    DownwardAPI:              true
QoS Class:                    BestEffort
Node-Selectors:               <none>
Tolerations:                  node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-lmwdx to addons-712341
  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-7bf82" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-d4sv2" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-712341 describe pod hello-world-app-5d498dc89-lmwdx ingress-nginx-admission-create-7bf82 ingress-nginx-admission-patch-d4sv2: exit status 1
addons_test.go:1113: (dbg) Run: out/minikube-linux-amd64 -p addons-712341 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-amd64 -p addons-712341 addons disable ingress-dns --alsologtostderr -v=1: (1.087299583s)
addons_test.go:1113: (dbg) Run: out/minikube-linux-amd64 -p addons-712341 addons disable ingress --alsologtostderr -v=1
addons_test.go:1113: (dbg) Done: out/minikube-linux-amd64 -p addons-712341 addons disable ingress --alsologtostderr -v=1: (7.819309488s)
--- FAIL: TestAddons/parallel/Ingress (159.91s)