=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run: kubectl --context addons-246361 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run: kubectl --context addons-246361 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run: kubectl --context addons-246361 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [6b69c078-1088-484d-990b-d8794ed9b2c6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [6b69c078-1088-484d-990b-d8794ed9b2c6] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003936538s
I1213 09:14:26.702361 391877 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run: out/minikube-linux-amd64 -p addons-246361 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-246361 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.197128231s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run: kubectl --context addons-246361 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run: out/minikube-linux-amd64 -p addons-246361 ip
addons_test.go:301: (dbg) Run: nslookup hello-john.test 192.168.39.185
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-246361 -n addons-246361
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-amd64 -p addons-246361 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-246361 logs -n 25: (1.198393991s)
helpers_test.go:261: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-553660 │ download-only-553660 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
│ start │ --download-only -p binary-mirror-573687 --alsologtostderr --binary-mirror http://127.0.0.1:35927 --driver=kvm2 --container-runtime=crio │ binary-mirror-573687 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ │
│ delete │ -p binary-mirror-573687 │ binary-mirror-573687 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
│ addons │ enable dashboard -p addons-246361 │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ │
│ addons │ disable dashboard -p addons-246361 │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ │
│ start │ -p addons-246361 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:13 UTC │
│ addons │ addons-246361 addons disable volcano --alsologtostderr -v=1 │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:13 UTC │ 13 Dec 25 09:13 UTC │
│ addons │ addons-246361 addons disable gcp-auth --alsologtostderr -v=1 │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
│ addons │ enable headlamp -p addons-246361 --alsologtostderr -v=1 │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
│ addons │ addons-246361 addons disable metrics-server --alsologtostderr -v=1 │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
│ ssh │ addons-246361 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ │
│ addons │ addons-246361 addons disable yakd --alsologtostderr -v=1 │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
│ addons │ addons-246361 addons disable headlamp --alsologtostderr -v=1 │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
│ ip │ addons-246361 ip │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
│ addons │ addons-246361 addons disable registry --alsologtostderr -v=1 │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
│ addons │ addons-246361 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-246361 │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
│ addons │ addons-246361 addons disable registry-creds --alsologtostderr -v=1 │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
│ addons │ addons-246361 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
│ ssh │ addons-246361 ssh cat /opt/local-path-provisioner/pvc-b8114b46-aff7-41f0-9a17-c8dadafee4e6_default_test-pvc/file1 │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
│ addons │ addons-246361 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:15 UTC │
│ addons │ addons-246361 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:14 UTC │ 13 Dec 25 09:14 UTC │
│ addons │ addons-246361 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:15 UTC │ 13 Dec 25 09:15 UTC │
│ addons │ addons-246361 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:15 UTC │ 13 Dec 25 09:15 UTC │
│ ip │ addons-246361 ip │ addons-246361 │ jenkins │ v1.37.0 │ 13 Dec 25 09:16 UTC │ 13 Dec 25 09:16 UTC │
└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/13 09:11:46
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1213 09:11:46.953001 392700 out.go:360] Setting OutFile to fd 1 ...
I1213 09:11:46.953255 392700 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:11:46.953265 392700 out.go:374] Setting ErrFile to fd 2...
I1213 09:11:46.953270 392700 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:11:46.953483 392700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-387918/.minikube/bin
I1213 09:11:46.954002 392700 out.go:368] Setting JSON to false
I1213 09:11:46.954894 392700 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3256,"bootTime":1765613851,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1213 09:11:46.954956 392700 start.go:143] virtualization: kvm guest
I1213 09:11:46.957081 392700 out.go:179] * [addons-246361] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1213 09:11:46.958544 392700 out.go:179] - MINIKUBE_LOCATION=22127
I1213 09:11:46.958548 392700 notify.go:221] Checking for updates...
I1213 09:11:46.961364 392700 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1213 09:11:46.962667 392700 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22127-387918/kubeconfig
I1213 09:11:46.964100 392700 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-387918/.minikube
I1213 09:11:46.965372 392700 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1213 09:11:46.966621 392700 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1213 09:11:46.968029 392700 driver.go:422] Setting default libvirt URI to qemu:///system
I1213 09:11:46.999316 392700 out.go:179] * Using the kvm2 driver based on user configuration
I1213 09:11:47.000473 392700 start.go:309] selected driver: kvm2
I1213 09:11:47.000496 392700 start.go:927] validating driver "kvm2" against <nil>
I1213 09:11:47.000508 392700 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1213 09:11:47.001189 392700 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1213 09:11:47.001452 392700 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1213 09:11:47.001477 392700 cni.go:84] Creating CNI manager for ""
I1213 09:11:47.001524 392700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1213 09:11:47.001534 392700 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1213 09:11:47.001579 392700 start.go:353] cluster config:
{Name:addons-246361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-246361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1213 09:11:47.001664 392700 iso.go:125] acquiring lock: {Name:mk4ce8bfab58620efe86d1c7a68d79ed9c81b6ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1213 09:11:47.003178 392700 out.go:179] * Starting "addons-246361" primary control-plane node in "addons-246361" cluster
I1213 09:11:47.004249 392700 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1213 09:11:47.004279 392700 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
I1213 09:11:47.004286 392700 cache.go:65] Caching tarball of preloaded images
I1213 09:11:47.004378 392700 preload.go:238] Found /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1213 09:11:47.004389 392700 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
I1213 09:11:47.004695 392700 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/config.json ...
I1213 09:11:47.004719 392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/config.json: {Name:mkf301320877bad44745f7d6b1089c83541b6e85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 09:11:47.004892 392700 start.go:360] acquireMachinesLock for addons-246361: {Name:mk911c6c71130df32abbe489ec2f7be251c727ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1213 09:11:47.004941 392700 start.go:364] duration metric: took 34.738µs to acquireMachinesLock for "addons-246361"
I1213 09:11:47.004960 392700 start.go:93] Provisioning new machine with config: &{Name:addons-246361 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-246361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1213 09:11:47.005019 392700 start.go:125] createHost starting for "" (driver="kvm2")
I1213 09:11:47.006513 392700 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1213 09:11:47.006683 392700 start.go:159] libmachine.API.Create for "addons-246361" (driver="kvm2")
I1213 09:11:47.006714 392700 client.go:173] LocalClient.Create starting
I1213 09:11:47.006817 392700 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem
I1213 09:11:47.114705 392700 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem
I1213 09:11:47.172692 392700 main.go:143] libmachine: creating domain...
I1213 09:11:47.172717 392700 main.go:143] libmachine: creating network...
I1213 09:11:47.174220 392700 main.go:143] libmachine: found existing default network
I1213 09:11:47.174518 392700 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1213 09:11:47.175188 392700 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed88f0}
I1213 09:11:47.175312 392700 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-246361</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1213 09:11:47.181642 392700 main.go:143] libmachine: creating private network mk-addons-246361 192.168.39.0/24...
I1213 09:11:47.252168 392700 main.go:143] libmachine: private network mk-addons-246361 192.168.39.0/24 created
I1213 09:11:47.252468 392700 main.go:143] libmachine: <network>
<name>mk-addons-246361</name>
<uuid>e7255bda-accc-46cf-a38c-4f99131fe471</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:a7:cb:c4'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1213 09:11:47.252503 392700 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361 ...
I1213 09:11:47.252533 392700 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22127-387918/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso
I1213 09:11:47.252548 392700 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22127-387918/.minikube
I1213 09:11:47.252665 392700 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22127-387918/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22127-387918/.minikube/cache/iso/amd64/minikube-v1.37.0-1765481609-22101-amd64.iso...
I1213 09:11:47.516414 392700 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa...
I1213 09:11:47.672714 392700 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/addons-246361.rawdisk...
I1213 09:11:47.672766 392700 main.go:143] libmachine: Writing magic tar header
I1213 09:11:47.672802 392700 main.go:143] libmachine: Writing SSH key tar header
I1213 09:11:47.672879 392700 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361 ...
I1213 09:11:47.672939 392700 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361
I1213 09:11:47.672962 392700 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361 (perms=drwx------)
I1213 09:11:47.672972 392700 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22127-387918/.minikube/machines
I1213 09:11:47.672981 392700 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22127-387918/.minikube/machines (perms=drwxr-xr-x)
I1213 09:11:47.672992 392700 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22127-387918/.minikube
I1213 09:11:47.673010 392700 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22127-387918/.minikube (perms=drwxr-xr-x)
I1213 09:11:47.673020 392700 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22127-387918
I1213 09:11:47.673031 392700 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22127-387918 (perms=drwxrwxr-x)
I1213 09:11:47.673041 392700 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1213 09:11:47.673055 392700 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1213 09:11:47.673070 392700 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1213 09:11:47.673084 392700 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1213 09:11:47.673124 392700 main.go:143] libmachine: checking permissions on dir: /home
I1213 09:11:47.673139 392700 main.go:143] libmachine: skipping /home - not owner
I1213 09:11:47.673144 392700 main.go:143] libmachine: defining domain...
I1213 09:11:47.674523 392700 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-246361</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/addons-246361.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-246361'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1213 09:11:47.683915 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:ee:7c:cf in network default
I1213 09:11:47.684655 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:11:47.684677 392700 main.go:143] libmachine: starting domain...
I1213 09:11:47.684681 392700 main.go:143] libmachine: ensuring networks are active...
I1213 09:11:47.685511 392700 main.go:143] libmachine: Ensuring network default is active
I1213 09:11:47.685936 392700 main.go:143] libmachine: Ensuring network mk-addons-246361 is active
I1213 09:11:47.686562 392700 main.go:143] libmachine: getting domain XML...
I1213 09:11:47.687604 392700 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-246361</name>
<uuid>27894c69-ae15-4bb1-a762-2eea43d7ca9d</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/addons-246361.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:2b:24:a6'/>
<source network='mk-addons-246361'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:ee:7c:cf'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1213 09:11:48.993826 392700 main.go:143] libmachine: waiting for domain to start...
I1213 09:11:48.995270 392700 main.go:143] libmachine: domain is now running
I1213 09:11:48.995297 392700 main.go:143] libmachine: waiting for IP...
I1213 09:11:48.996059 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:11:48.996619 392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
I1213 09:11:48.996633 392700 main.go:143] libmachine: trying to list again with source=arp
I1213 09:11:48.996967 392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
I1213 09:11:48.997028 392700 retry.go:31] will retry after 218.800416ms: waiting for domain to come up
I1213 09:11:49.217537 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:11:49.218123 392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
I1213 09:11:49.218141 392700 main.go:143] libmachine: trying to list again with source=arp
I1213 09:11:49.218453 392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
I1213 09:11:49.218514 392700 retry.go:31] will retry after 270.803348ms: waiting for domain to come up
I1213 09:11:49.491302 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:11:49.491900 392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
I1213 09:11:49.491922 392700 main.go:143] libmachine: trying to list again with source=arp
I1213 09:11:49.492318 392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
I1213 09:11:49.492361 392700 retry.go:31] will retry after 361.360348ms: waiting for domain to come up
I1213 09:11:49.855158 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:11:49.855771 392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
I1213 09:11:49.855791 392700 main.go:143] libmachine: trying to list again with source=arp
I1213 09:11:49.856123 392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
I1213 09:11:49.856169 392700 retry.go:31] will retry after 523.235093ms: waiting for domain to come up
I1213 09:11:50.380880 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:11:50.381340 392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
I1213 09:11:50.381358 392700 main.go:143] libmachine: trying to list again with source=arp
I1213 09:11:50.381604 392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
I1213 09:11:50.381649 392700 retry.go:31] will retry after 458.959376ms: waiting for domain to come up
I1213 09:11:50.842674 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:11:50.843207 392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
I1213 09:11:50.843223 392700 main.go:143] libmachine: trying to list again with source=arp
I1213 09:11:50.843565 392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
I1213 09:11:50.843617 392700 retry.go:31] will retry after 910.968695ms: waiting for domain to come up
I1213 09:11:51.755732 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:11:51.756361 392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
I1213 09:11:51.756379 392700 main.go:143] libmachine: trying to list again with source=arp
I1213 09:11:51.756683 392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
I1213 09:11:51.756726 392700 retry.go:31] will retry after 919.479091ms: waiting for domain to come up
I1213 09:11:52.677919 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:11:52.678554 392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
I1213 09:11:52.678572 392700 main.go:143] libmachine: trying to list again with source=arp
I1213 09:11:52.678909 392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
I1213 09:11:52.678951 392700 retry.go:31] will retry after 945.042693ms: waiting for domain to come up
I1213 09:11:53.626197 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:11:53.626896 392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
I1213 09:11:53.626916 392700 main.go:143] libmachine: trying to list again with source=arp
I1213 09:11:53.627220 392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
I1213 09:11:53.627262 392700 retry.go:31] will retry after 1.295865151s: waiting for domain to come up
I1213 09:11:54.924780 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:11:54.925369 392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
I1213 09:11:54.925386 392700 main.go:143] libmachine: trying to list again with source=arp
I1213 09:11:54.925696 392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
I1213 09:11:54.925738 392700 retry.go:31] will retry after 2.283738815s: waiting for domain to come up
I1213 09:11:57.210973 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:11:57.211665 392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
I1213 09:11:57.211717 392700 main.go:143] libmachine: trying to list again with source=arp
I1213 09:11:57.212170 392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
I1213 09:11:57.212214 392700 retry.go:31] will retry after 1.761254796s: waiting for domain to come up
I1213 09:11:58.976540 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:11:58.977240 392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
I1213 09:11:58.977265 392700 main.go:143] libmachine: trying to list again with source=arp
I1213 09:11:58.977586 392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
I1213 09:11:58.977630 392700 retry.go:31] will retry after 2.837727411s: waiting for domain to come up
I1213 09:12:01.818582 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:01.819082 392700 main.go:143] libmachine: no network interface addresses found for domain addons-246361 (source=lease)
I1213 09:12:01.819098 392700 main.go:143] libmachine: trying to list again with source=arp
I1213 09:12:01.819392 392700 main.go:143] libmachine: unable to find current IP address of domain addons-246361 in network mk-addons-246361 (interfaces detected: [])
I1213 09:12:01.819433 392700 retry.go:31] will retry after 3.284023822s: waiting for domain to come up
I1213 09:12:05.107142 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:05.107836 392700 main.go:143] libmachine: domain addons-246361 has current primary IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:05.107852 392700 main.go:143] libmachine: found domain IP: 192.168.39.185
I1213 09:12:05.107860 392700 main.go:143] libmachine: reserving static IP address...
I1213 09:12:05.108333 392700 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-246361", mac: "52:54:00:2b:24:a6", ip: "192.168.39.185"} in network mk-addons-246361
I1213 09:12:05.312161 392700 main.go:143] libmachine: reserved static IP address 192.168.39.185 for domain addons-246361
I1213 09:12:05.312194 392700 main.go:143] libmachine: waiting for SSH...
I1213 09:12:05.312202 392700 main.go:143] libmachine: Getting to WaitForSSH function...
I1213 09:12:05.314966 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:05.315529 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:05.315569 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:05.315858 392700 main.go:143] libmachine: Using SSH client type: native
I1213 09:12:05.316182 392700 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.185 22 <nil> <nil>}
I1213 09:12:05.316197 392700 main.go:143] libmachine: About to run SSH command:
exit 0
I1213 09:12:05.428517 392700 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1213 09:12:05.428931 392700 main.go:143] libmachine: domain creation complete
I1213 09:12:05.430388 392700 machine.go:94] provisionDockerMachine start ...
I1213 09:12:05.433139 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:05.433592 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:05.433614 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:05.433805 392700 main.go:143] libmachine: Using SSH client type: native
I1213 09:12:05.434024 392700 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.185 22 <nil> <nil>}
I1213 09:12:05.434034 392700 main.go:143] libmachine: About to run SSH command:
hostname
I1213 09:12:05.546519 392700 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1213 09:12:05.546565 392700 buildroot.go:166] provisioning hostname "addons-246361"
I1213 09:12:05.549531 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:05.549940 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:05.549969 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:05.550169 392700 main.go:143] libmachine: Using SSH client type: native
I1213 09:12:05.550402 392700 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.185 22 <nil> <nil>}
I1213 09:12:05.550418 392700 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-246361 && echo "addons-246361" | sudo tee /etc/hostname
I1213 09:12:05.688594 392700 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-246361
I1213 09:12:05.692571 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:05.693220 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:05.693262 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:05.693512 392700 main.go:143] libmachine: Using SSH client type: native
I1213 09:12:05.693738 392700 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.185 22 <nil> <nil>}
I1213 09:12:05.693779 392700 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-246361' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-246361/g' /etc/hosts;
else
echo '127.0.1.1 addons-246361' | sudo tee -a /etc/hosts;
fi
fi
I1213 09:12:05.813299 392700 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1213 09:12:05.813361 392700 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22127-387918/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-387918/.minikube}
I1213 09:12:05.813392 392700 buildroot.go:174] setting up certificates
I1213 09:12:05.813403 392700 provision.go:84] configureAuth start
I1213 09:12:05.816473 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:05.816881 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:05.816913 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:05.819100 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:05.819451 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:05.819474 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:05.819589 392700 provision.go:143] copyHostCerts
I1213 09:12:05.819665 392700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-387918/.minikube/ca.pem (1078 bytes)
I1213 09:12:05.819838 392700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-387918/.minikube/cert.pem (1123 bytes)
I1213 09:12:05.819904 392700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-387918/.minikube/key.pem (1675 bytes)
I1213 09:12:05.819957 392700 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-387918/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca-key.pem org=jenkins.addons-246361 san=[127.0.0.1 192.168.39.185 addons-246361 localhost minikube]
I1213 09:12:05.945888 392700 provision.go:177] copyRemoteCerts
I1213 09:12:05.945962 392700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1213 09:12:05.948610 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:05.948996 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:05.949019 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:05.949203 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:06.034349 392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1213 09:12:06.063967 392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1213 09:12:06.093197 392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
I1213 09:12:06.123178 392700 provision.go:87] duration metric: took 309.747511ms to configureAuth
I1213 09:12:06.123207 392700 buildroot.go:189] setting minikube options for container-runtime
I1213 09:12:06.123410 392700 config.go:182] Loaded profile config "addons-246361": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 09:12:06.127028 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:06.127529 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:06.127571 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:06.127819 392700 main.go:143] libmachine: Using SSH client type: native
I1213 09:12:06.128034 392700 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.185 22 <nil> <nil>}
I1213 09:12:06.128050 392700 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1213 09:12:06.362169 392700 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1213 09:12:06.362202 392700 machine.go:97] duration metric: took 931.795471ms to provisionDockerMachine
I1213 09:12:06.362213 392700 client.go:176] duration metric: took 19.355494352s to LocalClient.Create
I1213 09:12:06.362233 392700 start.go:167] duration metric: took 19.355549599s to libmachine.API.Create "addons-246361"
I1213 09:12:06.362244 392700 start.go:293] postStartSetup for "addons-246361" (driver="kvm2")
I1213 09:12:06.362258 392700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1213 09:12:06.362390 392700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1213 09:12:06.365396 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:06.365868 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:06.365898 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:06.366081 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:06.456866 392700 ssh_runner.go:195] Run: cat /etc/os-release
I1213 09:12:06.462096 392700 info.go:137] Remote host: Buildroot 2025.02
I1213 09:12:06.462139 392700 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-387918/.minikube/addons for local assets ...
I1213 09:12:06.462228 392700 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-387918/.minikube/files for local assets ...
I1213 09:12:06.462255 392700 start.go:296] duration metric: took 100.003451ms for postStartSetup
I1213 09:12:06.465450 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:06.465846 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:06.465879 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:06.466120 392700 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/config.json ...
I1213 09:12:06.466372 392700 start.go:128] duration metric: took 19.461339964s to createHost
I1213 09:12:06.468454 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:06.468787 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:06.468815 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:06.468991 392700 main.go:143] libmachine: Using SSH client type: native
I1213 09:12:06.469212 392700 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil> [] 0s} 192.168.39.185 22 <nil> <nil>}
I1213 09:12:06.469223 392700 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1213 09:12:06.577761 392700 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765617126.545180359
I1213 09:12:06.577788 392700 fix.go:216] guest clock: 1765617126.545180359
I1213 09:12:06.577797 392700 fix.go:229] Guest: 2025-12-13 09:12:06.545180359 +0000 UTC Remote: 2025-12-13 09:12:06.466386774 +0000 UTC m=+19.562568069 (delta=78.793585ms)
I1213 09:12:06.577822 392700 fix.go:200] guest clock delta is within tolerance: 78.793585ms
I1213 09:12:06.577829 392700 start.go:83] releasing machines lock for "addons-246361", held for 19.572878213s
I1213 09:12:06.580889 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:06.581314 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:06.581353 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:06.581916 392700 ssh_runner.go:195] Run: cat /version.json
I1213 09:12:06.581997 392700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1213 09:12:06.585261 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:06.585295 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:06.585742 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:06.585756 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:06.585776 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:06.585775 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:06.585994 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:06.585999 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:06.688401 392700 ssh_runner.go:195] Run: systemctl --version
I1213 09:12:06.694893 392700 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1213 09:12:06.853274 392700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1213 09:12:06.859776 392700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1213 09:12:06.859850 392700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1213 09:12:06.880046 392700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
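The two steps above first probe for a loopback CNI config (absent here, which the warning flags as harmless) and then disable any bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so the runtime stops loading them without anything being deleted. A minimal sketch of that rename-to-disable pattern (illustrative shell, not minikube's actual implementation):

    # Disable conflicting CNI configs by renaming them; the runtime only
    # loads files it finds under /etc/cni/net.d, so the suffix hides them.
    for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
      [ -e "$f" ] || continue                       # glob may match nothing
      case "$f" in *.mk_disabled) continue ;; esac  # already disabled
      sudo mv "$f" "$f.mk_disabled"
    done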
I1213 09:12:06.880075 392700 start.go:496] detecting cgroup driver to use...
I1213 09:12:06.880145 392700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1213 09:12:06.900037 392700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1213 09:12:06.917073 392700 docker.go:218] disabling cri-docker service (if available) ...
I1213 09:12:06.917159 392700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1213 09:12:06.934984 392700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1213 09:12:06.951958 392700 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1213 09:12:07.099427 392700 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1213 09:12:07.312861 392700 docker.go:234] disabling docker service ...
I1213 09:12:07.312937 392700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1213 09:12:07.329221 392700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1213 09:12:07.345058 392700 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1213 09:12:07.498908 392700 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1213 09:12:07.638431 392700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1213 09:12:07.653883 392700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1213 09:12:07.676228 392700 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1213 09:12:07.676303 392700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1213 09:12:07.688569 392700 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1213 09:12:07.688655 392700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1213 09:12:07.703485 392700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1213 09:12:07.716470 392700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1213 09:12:07.729815 392700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1213 09:12:07.744045 392700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1213 09:12:07.756792 392700 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1213 09:12:07.777883 392700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
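Between them, the tee and sed commands above pin the whole client/runtime plumbing: /etc/crictl.yaml points crictl at CRI-O's socket, and the CRI-O drop-in gets the pause image, cgroup driver, conmon cgroup, and an unprivileged-port sysctl. Reconstructed from the commands in the log (a sketch of the resulting keys, not a dump of the live files):

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf -- relevant keys after the edits
    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]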
I1213 09:12:07.790572 392700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1213 09:12:07.801505 392700 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1213 09:12:07.801581 392700 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1213 09:12:07.822519 392700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
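The failed sysctl at 09:12:07.801505 is expected on a fresh guest: /proc/sys/net/bridge/* only appears once the br_netfilter module is loaded, which is exactly what the follow-up modprobe does before IPv4 forwarding is switched on. The recovery pattern, as a sketch:

    # The bridge-nf-call-* sysctls exist only after br_netfilter is loaded.
    sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1 \
      || sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"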
I1213 09:12:07.835368 392700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1213 09:12:07.982998 392700 ssh_runner.go:195] Run: sudo systemctl restart crio
I1213 09:12:08.095368 392700 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1213 09:12:08.095481 392700 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1213 09:12:08.101267 392700 start.go:564] Will wait 60s for crictl version
I1213 09:12:08.101403 392700 ssh_runner.go:195] Run: which crictl
I1213 09:12:08.105718 392700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1213 09:12:08.141983 392700 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1213 09:12:08.142145 392700 ssh_runner.go:195] Run: crio --version
I1213 09:12:08.171160 392700 ssh_runner.go:195] Run: crio --version
I1213 09:12:08.201894 392700 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
I1213 09:12:08.206180 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:08.206583 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:08.206607 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:08.206826 392700 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1213 09:12:08.211573 392700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
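The one-liner above is an idempotent /etc/hosts update: grep -v strips any stale line for the name, echo appends the fresh mapping, and the file is rebuilt under /tmp before being copied into place with sudo. A generalized form (the pin_host helper is hypothetical; the technique is the one in the log):

    # Pin a name->IP mapping in /etc/hosts without duplicating entries.
    pin_host() {  # usage: pin_host 192.168.39.1 host.minikube.internal
      local ip="$1" name="$2" tmp
      tmp=$(mktemp)
      { grep -v $'\t'"${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
      sudo cp "$tmp" /etc/hosts && rm -f "$tmp"
    }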
I1213 09:12:08.227192 392700 kubeadm.go:884] updating cluster {Name:addons-246361 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-246361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1213 09:12:08.227381 392700 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1213 09:12:08.227450 392700 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 09:12:08.265582 392700 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
I1213 09:12:08.265672 392700 ssh_runner.go:195] Run: which lz4
I1213 09:12:08.270230 392700 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1213 09:12:08.275131 392700 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1213 09:12:08.275178 392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
I1213 09:12:09.527842 392700 crio.go:462] duration metric: took 1.257648109s to copy over tarball
I1213 09:12:09.527970 392700 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1213 09:12:11.010824 392700 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.482811625s)
I1213 09:12:11.010864 392700 crio.go:469] duration metric: took 1.482989092s to extract the tarball
I1213 09:12:11.010876 392700 ssh_runner.go:146] rm: /preloaded.tar.lz4
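The preload is an lz4-compressed tarball of the runtime's image store; extracting it under /var with --xattrs --xattrs-include security.capability preserves file capabilities that some images depend on, and the second crictl images call below confirms the cache took. The extract-and-clean step in isolation:

    # Restore the preloaded image cache, keeping security.capability
    # xattrs (file capabilities) intact, then remove the tarball.
    sudo tar --xattrs --xattrs-include security.capability \
        -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4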
I1213 09:12:11.047375 392700 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 09:12:11.091571 392700 crio.go:514] all images are preloaded for cri-o runtime.
I1213 09:12:11.091605 392700 cache_images.go:86] Images are preloaded, skipping loading
I1213 09:12:11.091617 392700 kubeadm.go:935] updating node { 192.168.39.185 8443 v1.34.2 crio true true} ...
I1213 09:12:11.091754 392700 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-246361 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
[Install]
config:
{KubernetesVersion:v1.34.2 ClusterName:addons-246361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1213 09:12:11.091833 392700 ssh_runner.go:195] Run: crio config
I1213 09:12:11.139099 392700 cni.go:84] Creating CNI manager for ""
I1213 09:12:11.139129 392700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1213 09:12:11.139153 392700 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1213 09:12:11.139177 392700 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-246361 NodeName:addons-246361 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1213 09:12:11.139296 392700 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.185
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "addons-246361"
  kubeletExtraArgs:
  - name: "node-ip"
    value: "192.168.39.185"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
  extraArgs:
  - name: "enable-admission-plugins"
    value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
  - name: "allocate-node-cidrs"
    value: "true"
  - name: "leader-elect"
    value: "false"
scheduler:
  extraArgs:
  - name: "leader-elect"
    value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
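A config generated this way can be vetted before the real run: kubeadm init accepts --dry-run, which renders the manifests without mutating the node (a sketch using the same binary path the log uses):

    # Render what kubeadm would do with this config, without changing
    # anything on the host.
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml --dry-run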
I1213 09:12:11.139379 392700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
I1213 09:12:11.152394 392700 binaries.go:51] Found k8s binaries, skipping transfer
I1213 09:12:11.152483 392700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1213 09:12:11.165051 392700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I1213 09:12:11.186035 392700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1213 09:12:11.206206 392700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
I1213 09:12:11.227252 392700 ssh_runner.go:195] Run: grep 192.168.39.185 control-plane.minikube.internal$ /etc/hosts
I1213 09:12:11.231476 392700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1213 09:12:11.245876 392700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1213 09:12:11.388594 392700 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1213 09:12:11.419994 392700 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361 for IP: 192.168.39.185
I1213 09:12:11.420037 392700 certs.go:195] generating shared ca certs ...
I1213 09:12:11.420056 392700 certs.go:227] acquiring lock for ca certs: {Name:mkd63ae6418df38b62936a9f8faa40fdd87e4397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 09:12:11.420235 392700 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-387918/.minikube/ca.key
I1213 09:12:11.490308 392700 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt ...
I1213 09:12:11.490357 392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt: {Name:mkf3d78756412421f921ae57a0b47cb7979b33b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 09:12:11.490556 392700 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-387918/.minikube/ca.key ...
I1213 09:12:11.490569 392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/ca.key: {Name:mk7072f2cd64776d50132ee3ce97378f6d0dff62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 09:12:11.490677 392700 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.key
I1213 09:12:11.528371 392700 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.crt ...
I1213 09:12:11.528406 392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.crt: {Name:mk577337d3eb3baea291abf0fe19ba51fb96fe3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 09:12:11.528602 392700 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.key ...
I1213 09:12:11.528624 392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.key: {Name:mk4db650447281a90b0762e0e393b5e90309227a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 09:12:11.528734 392700 certs.go:257] generating profile certs ...
I1213 09:12:11.528815 392700 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.key
I1213 09:12:11.528845 392700 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt with IP's: []
I1213 09:12:11.596658 392700 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt ...
I1213 09:12:11.596693 392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.crt: {Name:mk607a1e5ee3c49e27b769dcb5a9e59fce4a91c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 09:12:11.596882 392700 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.key ...
I1213 09:12:11.596904 392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/client.key: {Name:mk9bc892cca52ec705cdf46536ac1a653ead1c4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 09:12:11.597019 392700 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.key.69166ee8
I1213 09:12:11.597047 392700 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.crt.69166ee8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185]
I1213 09:12:11.636467 392700 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.crt.69166ee8 ...
I1213 09:12:11.636501 392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.crt.69166ee8: {Name:mk1b598603a8e21a8e6cc7ab13eaebd38083b673 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 09:12:11.636698 392700 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.key.69166ee8 ...
I1213 09:12:11.636718 392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.key.69166ee8: {Name:mkc041ba359e7131e4f5ee39710ad799a6e00ad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 09:12:11.636827 392700 certs.go:382] copying /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.crt.69166ee8 -> /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.crt
I1213 09:12:11.636948 392700 certs.go:386] copying /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.key.69166ee8 -> /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.key
I1213 09:12:11.637043 392700 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/proxy-client.key
I1213 09:12:11.637072 392700 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/proxy-client.crt with IP's: []
I1213 09:12:11.763081 392700 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/proxy-client.crt ...
I1213 09:12:11.763114 392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/proxy-client.crt: {Name:mk7584356f17525f94e9019268d0e8eafe4d8ec4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 09:12:11.763316 392700 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/proxy-client.key ...
I1213 09:12:11.763348 392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/proxy-client.key: {Name:mke59c7288708e4ec1ea6621d04c16802aa70d96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 09:12:11.763562 392700 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca-key.pem (1675 bytes)
I1213 09:12:11.763608 392700 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/ca.pem (1078 bytes)
I1213 09:12:11.763631 392700 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/cert.pem (1123 bytes)
I1213 09:12:11.763652 392700 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-387918/.minikube/certs/key.pem (1675 bytes)
I1213 09:12:11.764312 392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1213 09:12:11.795709 392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1213 09:12:11.825153 392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1213 09:12:11.854072 392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1213 09:12:11.883184 392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1213 09:12:11.912490 392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1213 09:12:11.941768 392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1213 09:12:11.972297 392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/profiles/addons-246361/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1213 09:12:12.002013 392700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-387918/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1213 09:12:12.031784 392700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1213 09:12:12.052504 392700 ssh_runner.go:195] Run: openssl version
I1213 09:12:12.059406 392700 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1213 09:12:12.074753 392700 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1213 09:12:12.093351 392700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1213 09:12:12.098799 392700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 09:12 /usr/share/ca-certificates/minikubeCA.pem
I1213 09:12:12.098882 392700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1213 09:12:12.109107 392700 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1213 09:12:12.122206 392700 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
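The openssl x509 -hash call above explains the odd b5213941.0 name: OpenSSL resolves CAs in /etc/ssl/certs by subject-name hash, so the PEM must also be reachable through a <hash>.0 symlink. The install pattern, as a sketch:

    # Make a CA resolvable by OpenSSL's hashed lookup in /etc/ssl/certs.
    pem=/etc/ssl/certs/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"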
I1213 09:12:12.134795 392700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1213 09:12:12.142309 392700 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1213 09:12:12.142410 392700 kubeadm.go:401] StartCluster: {Name:addons-246361 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-246361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1213 09:12:12.142514 392700 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1213 09:12:12.142587 392700 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1213 09:12:12.179166 392700 cri.go:89] found id: ""
I1213 09:12:12.179251 392700 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1213 09:12:12.191347 392700 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1213 09:12:12.203307 392700 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1213 09:12:12.214947 392700 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1213 09:12:12.214973 392700 kubeadm.go:158] found existing configuration files:
I1213 09:12:12.215030 392700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1213 09:12:12.225728 392700 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1213 09:12:12.225801 392700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1213 09:12:12.237334 392700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1213 09:12:12.247869 392700 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1213 09:12:12.247932 392700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1213 09:12:12.261137 392700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1213 09:12:12.272479 392700 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1213 09:12:12.272550 392700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1213 09:12:12.284071 392700 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1213 09:12:12.294641 392700 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1213 09:12:12.294702 392700 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1213 09:12:12.306441 392700 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1213 09:12:12.449873 392700 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1213 09:12:24.708477 392700 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
I1213 09:12:24.708588 392700 kubeadm.go:319] [preflight] Running pre-flight checks
I1213 09:12:24.708723 392700 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1213 09:12:24.708877 392700 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1213 09:12:24.709023 392700 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1213 09:12:24.709116 392700 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1213 09:12:24.710915 392700 out.go:252] - Generating certificates and keys ...
I1213 09:12:24.711018 392700 kubeadm.go:319] [certs] Using existing ca certificate authority
I1213 09:12:24.711113 392700 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1213 09:12:24.711210 392700 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1213 09:12:24.711297 392700 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1213 09:12:24.711373 392700 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1213 09:12:24.711438 392700 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1213 09:12:24.711507 392700 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1213 09:12:24.711697 392700 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-246361 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
I1213 09:12:24.711823 392700 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1213 09:12:24.712014 392700 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-246361 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
I1213 09:12:24.712116 392700 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1213 09:12:24.712222 392700 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1213 09:12:24.712306 392700 kubeadm.go:319] [certs] Generating "sa" key and public key
I1213 09:12:24.712415 392700 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1213 09:12:24.712493 392700 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1213 09:12:24.712573 392700 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1213 09:12:24.712645 392700 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1213 09:12:24.712732 392700 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1213 09:12:24.712803 392700 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1213 09:12:24.712870 392700 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1213 09:12:24.712970 392700 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1213 09:12:24.714311 392700 out.go:252] - Booting up control plane ...
I1213 09:12:24.714439 392700 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1213 09:12:24.714507 392700 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1213 09:12:24.714561 392700 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1213 09:12:24.714670 392700 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1213 09:12:24.714806 392700 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1213 09:12:24.714912 392700 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1213 09:12:24.715003 392700 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1213 09:12:24.715035 392700 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1213 09:12:24.715156 392700 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1213 09:12:24.715240 392700 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1213 09:12:24.715300 392700 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001842246s
I1213 09:12:24.715397 392700 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1213 09:12:24.715482 392700 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.185:8443/livez
I1213 09:12:24.715580 392700 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1213 09:12:24.715648 392700 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1213 09:12:24.715745 392700 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.608913029s
I1213 09:12:24.715808 392700 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.935427412s
I1213 09:12:24.715893 392700 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001770255s
I1213 09:12:24.716042 392700 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1213 09:12:24.716168 392700 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1213 09:12:24.716224 392700 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1213 09:12:24.716405 392700 kubeadm.go:319] [mark-control-plane] Marking the node addons-246361 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1213 09:12:24.716471 392700 kubeadm.go:319] [bootstrap-token] Using token: xb92sz.u2mw76x31y0nlqob
I1213 09:12:24.718757 392700 out.go:252] - Configuring RBAC rules ...
I1213 09:12:24.718870 392700 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1213 09:12:24.718967 392700 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1213 09:12:24.719118 392700 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1213 09:12:24.719319 392700 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1213 09:12:24.719451 392700 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1213 09:12:24.719535 392700 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1213 09:12:24.719639 392700 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1213 09:12:24.719677 392700 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1213 09:12:24.719715 392700 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1213 09:12:24.719720 392700 kubeadm.go:319]
I1213 09:12:24.719785 392700 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1213 09:12:24.719791 392700 kubeadm.go:319]
I1213 09:12:24.719851 392700 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1213 09:12:24.719857 392700 kubeadm.go:319]
I1213 09:12:24.719876 392700 kubeadm.go:319] mkdir -p $HOME/.kube
I1213 09:12:24.719931 392700 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1213 09:12:24.719971 392700 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1213 09:12:24.719977 392700 kubeadm.go:319]
I1213 09:12:24.720031 392700 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1213 09:12:24.720041 392700 kubeadm.go:319]
I1213 09:12:24.720078 392700 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1213 09:12:24.720087 392700 kubeadm.go:319]
I1213 09:12:24.720131 392700 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1213 09:12:24.720190 392700 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1213 09:12:24.720245 392700 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1213 09:12:24.720251 392700 kubeadm.go:319]
I1213 09:12:24.720333 392700 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1213 09:12:24.720395 392700 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1213 09:12:24.720400 392700 kubeadm.go:319]
I1213 09:12:24.720469 392700 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xb92sz.u2mw76x31y0nlqob \
I1213 09:12:24.720562 392700 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:8bcd1fb3d3850626282ed6c823b047645feff2758552312516eb7c1e818bc63a \
I1213 09:12:24.720601 392700 kubeadm.go:319] --control-plane
I1213 09:12:24.720607 392700 kubeadm.go:319]
I1213 09:12:24.720710 392700 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1213 09:12:24.720728 392700 kubeadm.go:319]
I1213 09:12:24.720816 392700 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xb92sz.u2mw76x31y0nlqob \
I1213 09:12:24.720954 392700 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:8bcd1fb3d3850626282ed6c823b047645feff2758552312516eb7c1e818bc63a
I1213 09:12:24.720973 392700 cni.go:84] Creating CNI manager for ""
I1213 09:12:24.720987 392700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1213 09:12:24.722564 392700 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1213 09:12:24.723835 392700 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1213 09:12:24.741347 392700 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
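The 496 bytes pushed to /etc/cni/net.d/1-k8s.conflist are minikube's bridge CNI config. A minimal conflist of the same shape (field values here are illustrative; the actual file may differ) could be installed like so:

    # Hypothetical minimal bridge CNI config of the kind written above.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF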
I1213 09:12:24.767123 392700 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1213 09:12:24.767276 392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 09:12:24.767293 392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-246361 minikube.k8s.io/updated_at=2025_12_13T09_12_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b minikube.k8s.io/name=addons-246361 minikube.k8s.io/primary=true
I1213 09:12:24.931941 392700 ops.go:34] apiserver oom_adj: -16
I1213 09:12:24.932072 392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 09:12:25.433103 392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 09:12:25.932874 392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 09:12:26.432540 392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 09:12:26.932611 392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 09:12:27.432403 392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 09:12:27.932915 392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 09:12:28.432920 392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 09:12:28.932540 392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 09:12:29.432404 392700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1213 09:12:29.537697 392700 kubeadm.go:1114] duration metric: took 4.770512396s to wait for elevateKubeSystemPrivileges
I1213 09:12:29.537766 392700 kubeadm.go:403] duration metric: took 17.395370255s to StartCluster
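The burst of kubectl get sa default calls at half-second intervals above is a readiness poll: kubeadm creates the default ServiceAccount asynchronously, and addons cannot be applied before it exists. A shell equivalent of that loop:

    # Wait until the "default" ServiceAccount shows up in the new cluster.
    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done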
I1213 09:12:29.537794 392700 settings.go:142] acquiring lock: {Name:mk59569246b81cd6fde64cc849a423eeb59f3563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 09:12:29.537948 392700 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22127-387918/kubeconfig
I1213 09:12:29.538369 392700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-387918/kubeconfig: {Name:mkc4c188214419e87992ca29ee1229c54fdde2b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 09:12:29.538694 392700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1213 09:12:29.538720 392700 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1213 09:12:29.538851 392700 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1213 09:12:29.538973 392700 config.go:182] Loaded profile config "addons-246361": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 09:12:29.539020 392700 addons.go:70] Setting yakd=true in profile "addons-246361"
I1213 09:12:29.539039 392700 addons.go:70] Setting cloud-spanner=true in profile "addons-246361"
I1213 09:12:29.539052 392700 addons.go:239] Setting addon yakd=true in "addons-246361"
I1213 09:12:29.539053 392700 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-246361"
I1213 09:12:29.539053 392700 addons.go:70] Setting registry-creds=true in profile "addons-246361"
I1213 09:12:29.539065 392700 addons.go:70] Setting gcp-auth=true in profile "addons-246361"
I1213 09:12:29.539074 392700 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-246361"
I1213 09:12:29.539088 392700 mustload.go:66] Loading cluster: addons-246361
I1213 09:12:29.539090 392700 host.go:66] Checking if "addons-246361" exists ...
I1213 09:12:29.539088 392700 addons.go:239] Setting addon registry-creds=true in "addons-246361"
I1213 09:12:29.539080 392700 addons.go:70] Setting default-storageclass=true in profile "addons-246361"
I1213 09:12:29.539126 392700 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-246361"
I1213 09:12:29.539161 392700 host.go:66] Checking if "addons-246361" exists ...
I1213 09:12:29.539283 392700 config.go:182] Loaded profile config "addons-246361": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 09:12:29.539409 392700 addons.go:70] Setting storage-provisioner=true in profile "addons-246361"
I1213 09:12:29.539451 392700 addons.go:239] Setting addon storage-provisioner=true in "addons-246361"
I1213 09:12:29.539638 392700 host.go:66] Checking if "addons-246361" exists ...
I1213 09:12:29.539319 392700 addons.go:70] Setting ingress=true in profile "addons-246361"
I1213 09:12:29.539940 392700 addons.go:239] Setting addon ingress=true in "addons-246361"
I1213 09:12:29.539998 392700 host.go:66] Checking if "addons-246361" exists ...
I1213 09:12:29.539057 392700 addons.go:239] Setting addon cloud-spanner=true in "addons-246361"
I1213 09:12:29.540069 392700 host.go:66] Checking if "addons-246361" exists ...
I1213 09:12:29.539735 392700 addons.go:70] Setting volcano=true in profile "addons-246361"
I1213 09:12:29.540722 392700 addons.go:239] Setting addon volcano=true in "addons-246361"
I1213 09:12:29.540758 392700 host.go:66] Checking if "addons-246361" exists ...
I1213 09:12:29.539749 392700 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-246361"
I1213 09:12:29.540929 392700 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-246361"
I1213 09:12:29.540958 392700 host.go:66] Checking if "addons-246361" exists ...
I1213 09:12:29.539763 392700 addons.go:70] Setting metrics-server=true in profile "addons-246361"
I1213 09:12:29.540994 392700 addons.go:239] Setting addon metrics-server=true in "addons-246361"
I1213 09:12:29.541022 392700 host.go:66] Checking if "addons-246361" exists ...
I1213 09:12:29.539772 392700 addons.go:70] Setting inspektor-gadget=true in profile "addons-246361"
I1213 09:12:29.541116 392700 addons.go:239] Setting addon inspektor-gadget=true in "addons-246361"
I1213 09:12:29.541142 392700 host.go:66] Checking if "addons-246361" exists ...
I1213 09:12:29.541399 392700 out.go:179] * Verifying Kubernetes components...
I1213 09:12:29.539795 392700 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-246361"
I1213 09:12:29.539807 392700 addons.go:70] Setting registry=true in profile "addons-246361"
I1213 09:12:29.539818 392700 addons.go:70] Setting volumesnapshots=true in profile "addons-246361"
I1213 09:12:29.539023 392700 addons.go:70] Setting ingress-dns=true in profile "addons-246361"
I1213 09:12:29.539032 392700 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-246361"
I1213 09:12:29.541480 392700 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-246361"
I1213 09:12:29.541538 392700 addons.go:239] Setting addon registry=true in "addons-246361"
I1213 09:12:29.541561 392700 host.go:66] Checking if "addons-246361" exists ...
I1213 09:12:29.541959 392700 addons.go:239] Setting addon volumesnapshots=true in "addons-246361"
I1213 09:12:29.542011 392700 host.go:66] Checking if "addons-246361" exists ...
I1213 09:12:29.541504 392700 host.go:66] Checking if "addons-246361" exists ...
I1213 09:12:29.542199 392700 addons.go:239] Setting addon ingress-dns=true in "addons-246361"
I1213 09:12:29.542239 392700 host.go:66] Checking if "addons-246361" exists ...
I1213 09:12:29.541515 392700 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-246361"
I1213 09:12:29.542403 392700 host.go:66] Checking if "addons-246361" exists ...
I1213 09:12:29.543020 392700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1213 09:12:29.545455 392700 host.go:66] Checking if "addons-246361" exists ...
I1213 09:12:29.547189 392700 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-246361"
I1213 09:12:29.547256 392700 host.go:66] Checking if "addons-246361" exists ...
I1213 09:12:29.547189 392700 addons.go:239] Setting addon default-storageclass=true in "addons-246361"
I1213 09:12:29.547348 392700 host.go:66] Checking if "addons-246361" exists ...
I1213 09:12:29.548099 392700 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1213 09:12:29.548167 392700 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1213 09:12:29.549023 392700 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
W1213 09:12:29.549119 392700 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1213 09:12:29.549911 392700 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
I1213 09:12:29.550037 392700 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1213 09:12:29.550054 392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1213 09:12:29.549929 392700 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
I1213 09:12:29.549938 392700 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1213 09:12:29.550289 392700 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1213 09:12:29.550926 392700 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1213 09:12:29.550965 392700 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1213 09:12:29.551001 392700 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1213 09:12:29.551369 392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1213 09:12:29.551814 392700 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1213 09:12:29.551827 392700 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
I1213 09:12:29.551838 392700 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1213 09:12:29.551869 392700 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1213 09:12:29.551888 392700 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1213 09:12:29.553051 392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1213 09:12:29.552361 392700 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1213 09:12:29.553131 392700 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1213 09:12:29.553575 392700 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1213 09:12:29.553619 392700 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1213 09:12:29.554058 392700 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1213 09:12:29.553642 392700 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1213 09:12:29.554358 392700 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1213 09:12:29.554376 392700 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1213 09:12:29.554424 392700 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1213 09:12:29.554932 392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1213 09:12:29.554430 392700 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1213 09:12:29.555031 392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1213 09:12:29.554444 392700 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1213 09:12:29.555162 392700 out.go:179] - Using image docker.io/busybox:stable
I1213 09:12:29.555158 392700 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1213 09:12:29.555223 392700 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1213 09:12:29.555808 392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1213 09:12:29.555920 392700 out.go:179] - Using image docker.io/registry:3.0.0
I1213 09:12:29.555961 392700 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1213 09:12:29.556353 392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1213 09:12:29.556808 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.557967 392700 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1213 09:12:29.558366 392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1213 09:12:29.558364 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:29.558400 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.557973 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.558609 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.558663 392700 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1213 09:12:29.558666 392700 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1213 09:12:29.559185 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:29.559378 392700 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1213 09:12:29.560365 392700 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1213 09:12:29.560383 392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1213 09:12:29.560421 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:29.560451 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.560459 392700 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1213 09:12:29.560476 392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1213 09:12:29.560521 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:29.560714 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.561263 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:29.561439 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:29.562280 392700 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1213 09:12:29.563051 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.563304 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.564057 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:29.564094 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.564398 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.564654 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:29.564685 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.564708 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:29.564850 392700 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1213 09:12:29.565074 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.565342 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:29.565808 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:29.565840 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.566182 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.566403 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.566838 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:29.566872 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.567019 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:29.567145 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.567513 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:29.567537 392700 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1213 09:12:29.567838 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:29.567942 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:29.567988 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.568044 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.568149 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.568347 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:29.568475 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:29.568501 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.568666 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:29.569181 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:29.569231 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.569557 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:29.569589 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.569814 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:29.570087 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:29.570116 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.570231 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.570294 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:29.570387 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.570745 392700 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1213 09:12:29.570846 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:29.570878 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:29.570914 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.570883 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.571130 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:29.571434 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:29.573751 392700 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1213 09:12:29.574842 392700 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1213 09:12:29.574859 392700 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1213 09:12:29.577229 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.577563 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:29.577584 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:29.577719 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
W1213 09:12:29.933537 392700 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50640->192.168.39.185:22: read: connection reset by peer
I1213 09:12:29.933586 392700 retry.go:31] will retry after 197.844594ms: ssh: handshake failed: read tcp 192.168.39.1:50640->192.168.39.185:22: read: connection reset by peer
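The handshake failure above is treated as transient: retry.go waits a jittered interval and dials again, which is why the run continues normally afterwards. A minimal sketch of that retry pattern; the attempt budget and backoff bounds are illustrative, since the real parameters are not shown in the log:

```go
// retryTransient retries fn with jittered exponential backoff -- the pattern
// behind the "will retry after 197.844594ms" line above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryTransient(fn func() error, attempts int, base time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jittered exponential backoff: base * 2^i, scaled by a random factor
		// so concurrent clients don't retry in lockstep.
		sleep := time.Duration(float64(base) * float64(int(1)<<i) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := retryTransient(func() error {
		calls++
		if calls < 3 { // simulate two transient failures, then success
			return errors.New("ssh: handshake failed: connection reset by peer")
		}
		return nil
	}, 5, 100*time.Millisecond)
	fmt.Println("result:", err)
}
```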
I1213 09:12:30.338791 392700 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1213 09:12:30.338822 392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1213 09:12:30.499501 392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1213 09:12:30.510589 392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1213 09:12:30.523971 392700 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1213 09:12:30.524015 392700 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1213 09:12:30.526184 392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1213 09:12:30.549254 392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1213 09:12:30.593131 392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1213 09:12:30.594698 392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1213 09:12:30.599182 392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1213 09:12:30.612439 392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1213 09:12:30.625079 392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1213 09:12:30.633054 392700 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.094311028s)
I1213 09:12:30.633075 392700 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.0900139s)
I1213 09:12:30.633168 392700 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1213 09:12:30.633277 392700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
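The sed pipeline above edits the coredns ConfigMap in place: it inserts a hosts block ahead of the forward directive so host.minikube.internal resolves to 192.168.39.1, and a log directive ahead of errors. The same Corefile edit as a small Go sketch; the input Corefile here is a trimmed example, not the cluster's actual ConfigMap:

```go
// injectHostRecord mirrors the sed pipeline in the log: insert a hosts{}
// block just above "forward . /etc/resolv.conf" so host.minikube.internal
// resolves to the host-side IP, and enable query logging above "errors".
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"    hosts {\n       %s host.minikube.internal\n       fallthrough\n    }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock) // insert above forward, like sed's /i
		}
		if trimmed == "errors" {
			out.WriteString("    log\n") // second sed -e expression
		}
		out.WriteString(line + "\n")
	}
	return strings.TrimRight(out.String(), "\n")
}

func main() {
	corefile := `.:53 {
    errors
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
}`
	fmt.Println(injectHostRecord(corefile, "192.168.39.1"))
}
```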
I1213 09:12:30.675055 392700 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1213 09:12:30.675092 392700 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1213 09:12:30.680424 392700 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1213 09:12:30.680449 392700 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1213 09:12:30.730911 392700 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1213 09:12:30.730935 392700 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1213 09:12:30.760367 392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1213 09:12:30.838715 392700 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1213 09:12:30.838743 392700 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1213 09:12:31.109144 392700 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1213 09:12:31.109165 392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1213 09:12:31.265733 392700 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1213 09:12:31.265775 392700 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1213 09:12:31.290008 392700 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1213 09:12:31.290077 392700 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1213 09:12:31.367762 392700 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1213 09:12:31.367795 392700 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1213 09:12:31.473065 392700 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1213 09:12:31.473101 392700 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1213 09:12:31.503287 392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1213 09:12:31.646622 392700 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1213 09:12:31.646654 392700 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1213 09:12:31.646715 392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1213 09:12:31.702602 392700 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1213 09:12:31.702635 392700 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1213 09:12:31.818406 392700 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1213 09:12:31.818437 392700 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1213 09:12:32.109802 392700 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1213 09:12:32.109829 392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1213 09:12:32.113835 392700 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1213 09:12:32.113860 392700 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1213 09:12:32.244077 392700 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1213 09:12:32.244112 392700 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1213 09:12:32.441387 392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1213 09:12:32.450657 392700 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1213 09:12:32.450682 392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1213 09:12:32.619691 392700 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1213 09:12:32.619729 392700 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1213 09:12:32.798421 392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1213 09:12:33.028942 392700 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1213 09:12:33.028982 392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1213 09:12:33.610564 392700 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1213 09:12:33.610597 392700 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1213 09:12:33.890055 392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.390506314s)
I1213 09:12:33.929974 392700 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1213 09:12:33.930001 392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1213 09:12:34.139582 392700 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1213 09:12:34.139623 392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1213 09:12:34.536656 392700 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1213 09:12:34.536694 392700 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1213 09:12:34.747808 392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1213 09:12:35.084304 392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.573673261s)
I1213 09:12:35.084422 392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.558198971s)
I1213 09:12:36.729713 392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.180408205s)
I1213 09:12:37.009362 392700 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1213 09:12:37.012488 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:37.012968 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:37.013014 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:37.013180 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:37.245752 392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (6.651008917s)
I1213 09:12:37.245872 392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.652715161s)
I1213 09:12:37.246016 392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.646797926s)
I1213 09:12:37.246073 392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.633594881s)
I1213 09:12:37.317966 392700 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1213 09:12:37.470559 392700 addons.go:239] Setting addon gcp-auth=true in "addons-246361"
I1213 09:12:37.470632 392700 host.go:66] Checking if "addons-246361" exists ...
I1213 09:12:37.472656 392700 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1213 09:12:37.474964 392700 main.go:143] libmachine: domain addons-246361 has defined MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:37.475365 392700 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2b:24:a6", ip: ""} in network mk-addons-246361: {Iface:virbr1 ExpiryTime:2025-12-13 10:12:01 +0000 UTC Type:0 Mac:52:54:00:2b:24:a6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:addons-246361 Clientid:01:52:54:00:2b:24:a6}
I1213 09:12:37.475391 392700 main.go:143] libmachine: domain addons-246361 has defined IP address 192.168.39.185 and MAC address 52:54:00:2b:24:a6 in network mk-addons-246361
I1213 09:12:37.475549 392700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22127-387918/.minikube/machines/addons-246361/id_rsa Username:docker}
I1213 09:12:38.124537 392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.499407472s)
I1213 09:12:38.124591 392700 addons.go:495] Verifying addon ingress=true in "addons-246361"
I1213 09:12:38.124588 392700 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.49127635s)
I1213 09:12:38.124618 392700 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.491422156s)
I1213 09:12:38.124681 392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.364280896s)
I1213 09:12:38.124622 392700 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1213 09:12:38.124750 392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.621438006s)
I1213 09:12:38.124767 392700 addons.go:495] Verifying addon registry=true in "addons-246361"
I1213 09:12:38.125001 392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.478248802s)
I1213 09:12:38.125046 392700 addons.go:495] Verifying addon metrics-server=true in "addons-246361"
I1213 09:12:38.125103 392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.683676023s)
I1213 09:12:38.125682 392700 node_ready.go:35] waiting up to 6m0s for node "addons-246361" to be "Ready" ...
I1213 09:12:38.126988 392700 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-246361 service yakd-dashboard -n yakd-dashboard
I1213 09:12:38.126998 392700 out.go:179] * Verifying registry addon...
I1213 09:12:38.127019 392700 out.go:179] * Verifying ingress addon...
I1213 09:12:38.129134 392700 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1213 09:12:38.129145 392700 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1213 09:12:38.134101 392700 node_ready.go:49] node "addons-246361" is "Ready"
I1213 09:12:38.134126 392700 node_ready.go:38] duration metric: took 8.324074ms for node "addons-246361" to be "Ready" ...
I1213 09:12:38.134141 392700 api_server.go:52] waiting for apiserver process to appear ...
I1213 09:12:38.134193 392700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1213 09:12:38.149759 392700 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1213 09:12:38.149781 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:38.150916 392700 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1213 09:12:38.150943 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
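The kapi.go entries above poll pods by label selector until they leave Pending. A rough client-go equivalent of that wait loop; the kubeconfig path, namespace, selector, and timeout are illustrative rather than minikube's actual implementation:

```go
// waitForPods polls the API server until every pod matching the selector is
// Running -- roughly what the "waiting for pod ..." kapi.go lines are doing.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			ready := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					ready = false
				}
			}
			if ready {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pods %q in %q not running within %v", selector, ns, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForPods(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		panic(err)
	}
}
```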
I1213 09:12:38.658750 392700 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-246361" context rescaled to 1 replicas
I1213 09:12:38.666607 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:38.670387 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:39.102011 392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.303535198s)
W1213 09:12:39.102078 392700 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1213 09:12:39.102111 392700 retry.go:31] will retry after 359.726202ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
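This failure is an ordering race: the snapshot CRDs and a VolumeSnapshotClass that depends on them are applied in a single kubectl invocation, so the custom resource can reach the API server before the new kinds are being served. minikube handles it by retrying (and, below, re-applying with --force). A sketch of the ordering fix the error message itself suggests, applying CRDs first and waiting for them to be Established before applying dependents; it assumes kubectl on PATH with a kubeconfig pointing at the cluster, and reuses the manifest paths from the log:

```go
// applyWithCRDs avoids the "ensure CRDs are installed first" race by applying
// the CRD manifests, waiting for them to be Established, and only then
// applying the resources that depend on them.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	crds := []string{
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
	}
	for _, f := range crds {
		if err := run("apply", "-f", f); err != nil {
			panic(err)
		}
	}
	// Block until the API server actually serves the new kinds.
	if err := run("wait", "--for=condition=established", "--timeout=60s",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
		"crd/volumesnapshotcontents.snapshot.storage.k8s.io",
		"crd/volumesnapshots.snapshot.storage.k8s.io"); err != nil {
		panic(err)
	}
	// Now the VolumeSnapshotClass can be mapped to a served kind.
	if err := run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
		panic(err)
	}
}
```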
I1213 09:12:39.155064 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:39.155751 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:39.462728 392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1213 09:12:39.643966 392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.896074746s)
I1213 09:12:39.644026 392700 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-246361"
I1213 09:12:39.644023 392700 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.171325845s)
I1213 09:12:39.644079 392700 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.509861668s)
I1213 09:12:39.644115 392700 api_server.go:72] duration metric: took 10.105347669s to wait for apiserver process to appear ...
I1213 09:12:39.644128 392700 api_server.go:88] waiting for apiserver healthz status ...
I1213 09:12:39.644244 392700 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
I1213 09:12:39.646045 392700 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
I1213 09:12:39.646070 392700 out.go:179] * Verifying csi-hostpath-driver addon...
I1213 09:12:39.648025 392700 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1213 09:12:39.648484 392700 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1213 09:12:39.649153 392700 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1213 09:12:39.649181 392700 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1213 09:12:39.657088 392700 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
ok
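The healthz check above is a plain HTTPS GET against the apiserver, polled until it answers 200. A minimal polling sketch; InsecureSkipVerify only keeps the example self-contained, where a real client would trust the cluster CA the way minikube's API client does:

```go
// checkHealthz performs the "Checking apiserver healthz at ..." probe from
// the log: GET /healthz and treat anything but 200 as not yet healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Self-contained sketch only; prefer the cluster CA in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	for {
		if err := checkHealthz("https://192.168.39.185:8443/healthz"); err == nil {
			return
		}
		time.Sleep(time.Second)
	}
}
```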
I1213 09:12:39.661986 392700 api_server.go:141] control plane version: v1.34.2
I1213 09:12:39.662011 392700 api_server.go:131] duration metric: took 17.78801ms to wait for apiserver health ...
I1213 09:12:39.662020 392700 system_pods.go:43] waiting for kube-system pods to appear ...
I1213 09:12:39.710789 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:39.710847 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:39.711019 392700 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 09:12:39.711047 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:39.711409 392700 system_pods.go:59] 20 kube-system pods found
I1213 09:12:39.711452 392700 system_pods.go:61] "amd-gpu-device-plugin-pcr8k" [ae35898b-cac4-4c5d-b1f5-3de19fba17ef] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1213 09:12:39.711464 392700 system_pods.go:61] "coredns-66bc5c9577-225xg" [f6715b38-1f5c-45f6-ae76-9c279196f39b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1213 09:12:39.711473 392700 system_pods.go:61] "coredns-66bc5c9577-x9vlt" [e3722310-4cbe-4697-8045-c8353e07f242] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1213 09:12:39.711482 392700 system_pods.go:61] "csi-hostpath-attacher-0" [9d647a4f-c7a0-4cb6-972a-ee1caa579994] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1213 09:12:39.711486 392700 system_pods.go:61] "csi-hostpath-resizer-0" [0f6f307e-148c-4651-b4f6-3f3f1c171223] Pending
I1213 09:12:39.711495 392700 system_pods.go:61] "csi-hostpathplugin-lcmz2" [57b68f56-3f72-481e-a5dc-48874663d2b0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1213 09:12:39.711498 392700 system_pods.go:61] "etcd-addons-246361" [9c6ee6ef-dcf1-4eb6-843f-bbe57ee104d0] Running
I1213 09:12:39.711502 392700 system_pods.go:61] "kube-apiserver-addons-246361" [95cfa299-af07-4241-99e6-f974e0615596] Running
I1213 09:12:39.711506 392700 system_pods.go:61] "kube-controller-manager-addons-246361" [598b762c-7498-4f89-8bad-8c38caaf259f] Running
I1213 09:12:39.711526 392700 system_pods.go:61] "kube-ingress-dns-minikube" [3d548ec6-ac97-4b00-a992-cf50e0728d3c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1213 09:12:39.711535 392700 system_pods.go:61] "kube-proxy-f6vpr" [b60db149-95ea-4d92-88d4-958521a5cf75] Running
I1213 09:12:39.711541 392700 system_pods.go:61] "kube-scheduler-addons-246361" [6768514c-186b-4e66-bb4d-e4e91b025fb2] Running
I1213 09:12:39.711549 392700 system_pods.go:61] "metrics-server-85b7d694d7-pglv5" [ce676a7b-70bb-4524-b292-8a00796b0425] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1213 09:12:39.711558 392700 system_pods.go:61] "nvidia-device-plugin-daemonset-ghprj" [64bd87e7-7e06-4465-abb1-e27282853105] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1213 09:12:39.711570 392700 system_pods.go:61] "registry-6b586f9694-4vn9j" [0ffa6230-ba82-4c5a-bfd3-a4c73acdce35] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1213 09:12:39.711585 392700 system_pods.go:61] "registry-creds-764b6fb674-9h8mr" [9b8507b1-f028-4a81-8e59-5773d4e71038] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1213 09:12:39.711590 392700 system_pods.go:61] "registry-proxy-q8xvn" [6c738182-6c24-4d8e-acc8-25d9eae8cfbd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1213 09:12:39.711596 392700 system_pods.go:61] "snapshot-controller-7d9fbc56b8-g7rgv" [bebdd078-f41c-4293-a21f-61f2269782c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1213 09:12:39.711602 392700 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xps7r" [07924ab0-91ea-41fa-bf06-3b4cc735fdae] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1213 09:12:39.711607 392700 system_pods.go:61] "storage-provisioner" [9b05e28d-a4a6-4e90-af0c-bf01fd93b1e1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1213 09:12:39.711615 392700 system_pods.go:74] duration metric: took 49.575503ms to wait for pod list to return data ...
I1213 09:12:39.711630 392700 default_sa.go:34] waiting for default service account to be created ...
I1213 09:12:39.724425 392700 default_sa.go:45] found service account: "default"
I1213 09:12:39.724455 392700 default_sa.go:55] duration metric: took 12.816866ms for default service account to be created ...
I1213 09:12:39.724464 392700 system_pods.go:116] waiting for k8s-apps to be running ...
I1213 09:12:39.741228 392700 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1213 09:12:39.741253 392700 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1213 09:12:39.770994 392700 system_pods.go:86] 20 kube-system pods found
I1213 09:12:39.771032 392700 system_pods.go:89] "amd-gpu-device-plugin-pcr8k" [ae35898b-cac4-4c5d-b1f5-3de19fba17ef] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1213 09:12:39.771047 392700 system_pods.go:89] "coredns-66bc5c9577-225xg" [f6715b38-1f5c-45f6-ae76-9c279196f39b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1213 09:12:39.771060 392700 system_pods.go:89] "coredns-66bc5c9577-x9vlt" [e3722310-4cbe-4697-8045-c8353e07f242] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1213 09:12:39.771067 392700 system_pods.go:89] "csi-hostpath-attacher-0" [9d647a4f-c7a0-4cb6-972a-ee1caa579994] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1213 09:12:39.771075 392700 system_pods.go:89] "csi-hostpath-resizer-0" [0f6f307e-148c-4651-b4f6-3f3f1c171223] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1213 09:12:39.771084 392700 system_pods.go:89] "csi-hostpathplugin-lcmz2" [57b68f56-3f72-481e-a5dc-48874663d2b0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1213 09:12:39.771091 392700 system_pods.go:89] "etcd-addons-246361" [9c6ee6ef-dcf1-4eb6-843f-bbe57ee104d0] Running
I1213 09:12:39.771098 392700 system_pods.go:89] "kube-apiserver-addons-246361" [95cfa299-af07-4241-99e6-f974e0615596] Running
I1213 09:12:39.771106 392700 system_pods.go:89] "kube-controller-manager-addons-246361" [598b762c-7498-4f89-8bad-8c38caaf259f] Running
I1213 09:12:39.771111 392700 system_pods.go:89] "kube-ingress-dns-minikube" [3d548ec6-ac97-4b00-a992-cf50e0728d3c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1213 09:12:39.771115 392700 system_pods.go:89] "kube-proxy-f6vpr" [b60db149-95ea-4d92-88d4-958521a5cf75] Running
I1213 09:12:39.771119 392700 system_pods.go:89] "kube-scheduler-addons-246361" [6768514c-186b-4e66-bb4d-e4e91b025fb2] Running
I1213 09:12:39.771123 392700 system_pods.go:89] "metrics-server-85b7d694d7-pglv5" [ce676a7b-70bb-4524-b292-8a00796b0425] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1213 09:12:39.771129 392700 system_pods.go:89] "nvidia-device-plugin-daemonset-ghprj" [64bd87e7-7e06-4465-abb1-e27282853105] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1213 09:12:39.771143 392700 system_pods.go:89] "registry-6b586f9694-4vn9j" [0ffa6230-ba82-4c5a-bfd3-a4c73acdce35] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1213 09:12:39.771153 392700 system_pods.go:89] "registry-creds-764b6fb674-9h8mr" [9b8507b1-f028-4a81-8e59-5773d4e71038] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1213 09:12:39.771161 392700 system_pods.go:89] "registry-proxy-q8xvn" [6c738182-6c24-4d8e-acc8-25d9eae8cfbd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1213 09:12:39.771169 392700 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g7rgv" [bebdd078-f41c-4293-a21f-61f2269782c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1213 09:12:39.771186 392700 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xps7r" [07924ab0-91ea-41fa-bf06-3b4cc735fdae] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1213 09:12:39.771196 392700 system_pods.go:89] "storage-provisioner" [9b05e28d-a4a6-4e90-af0c-bf01fd93b1e1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1213 09:12:39.771205 392700 system_pods.go:126] duration metric: took 46.73405ms to wait for k8s-apps to be running ...
I1213 09:12:39.771215  392700 system_svc.go:44] waiting for kubelet service to be running ...
I1213 09:12:39.771271 392700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1213 09:12:39.824008 392700 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1213 09:12:39.824035 392700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1213 09:12:39.938872 392700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1213 09:12:40.139210 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:40.139318 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:40.160406 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:40.637579 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:40.639764 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:40.653405 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:41.140286 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:41.142006 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:41.154100 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:41.659408 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:41.666977 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:41.706882 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:41.707590 392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.244812289s)
I1213 09:12:41.707622 392700 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.936324258s)
I1213 09:12:41.707654  392700 system_svc.go:56] duration metric: took 1.93643377s (WaitForService) to wait for kubelet
I1213 09:12:41.707670 392700 kubeadm.go:587] duration metric: took 12.168901765s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1213 09:12:41.707700 392700 node_conditions.go:102] verifying NodePressure condition ...
I1213 09:12:41.707704 392700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.768793764s)
I1213 09:12:41.708822 392700 addons.go:495] Verifying addon gcp-auth=true in "addons-246361"
I1213 09:12:41.711170 392700 out.go:179] * Verifying gcp-auth addon...
I1213 09:12:41.713116 392700 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1213 09:12:41.732539 392700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1213 09:12:41.732608 392700 node_conditions.go:123] node cpu capacity is 2
I1213 09:12:41.732637 392700 node_conditions.go:105] duration metric: took 24.930191ms to run NodePressure ...
I1213 09:12:41.732656 392700 start.go:242] waiting for startup goroutines ...
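(The node_conditions.go lines above read each node's ephemeral-storage and CPU capacity and verify that no pressure condition is set. A sketch of the equivalent client-go calls follows; the field names — node.Status.Capacity, node.Status.Conditions — are the real Kubernetes API, while the helper name and kubeconfig choice are our assumptions, not minikube's actual code.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// verifyNodePressure reports each node's capacity and fails if any
// memory/disk/PID pressure condition is True, mirroring the
// NodePressure verification in the log above.
func verifyNodePressure(cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n",
			n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					return fmt.Errorf("node %s reports %s", n.Name, c.Type)
				}
			}
		}
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := verifyNodePressure(cs); err != nil {
		panic(err)
	}
}
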
I1213 09:12:41.747891 392700 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1213 09:12:41.747929 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:42.138893 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:42.141690 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:42.157929 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:42.219339 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:42.637038 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:42.637103 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:42.659295 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:42.735695 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:43.137422 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:43.137589 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:43.152264 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:43.219460 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:43.635247 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:43.636314 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:43.655595 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:43.719173 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:44.133309 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:44.136803 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:44.153247 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:44.219809 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:44.638851 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:44.640014 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:44.652718 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:44.725183 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:45.133593 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:45.136464 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:45.155694 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:45.216820 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:45.633873 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:45.633978 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:45.652866 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:45.716526 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:46.137389 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:46.137441 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:46.152335 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:46.237426 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:46.634114 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:46.634108 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:46.652970 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:46.717097 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:47.135279 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:47.136853 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:47.152605 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:47.218060 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:47.633152 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:47.633159 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:47.651956 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:47.719945 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:48.133994 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:48.134541 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:48.154724 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:48.217360 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:48.637246 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:48.637488 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:48.653939 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:48.718287 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:49.136026 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:49.138572 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:49.154830 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:49.222042 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:49.633871 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:49.634758 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:49.652591 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:49.717441 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:50.133119 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:50.134136 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:50.153957 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:50.217526 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:50.637032 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:50.638292 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:50.652212 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:50.719546 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:51.132697 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:51.134911 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:51.153412 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:51.216230 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:51.634427 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:51.634453 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:51.653045 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:51.717637 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:52.132518 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:52.133301 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:52.153671 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:52.216992 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:52.633949 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:52.634621 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:52.653764 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:52.716829 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:53.134230 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:53.134736 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:53.152090 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:53.234461 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:53.638367 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:53.639934 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:53.652859 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:53.717349 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:54.134998 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:54.138551 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:54.154961 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:54.218702 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:54.636288 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:54.637624 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:54.656413 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:54.717989 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:55.144978 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:55.145144 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:55.152803 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:55.222062 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:55.634895 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:55.638084 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:55.654400 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:55.719756 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:56.135967 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:56.136036 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:56.154005 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:56.216806 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:56.637027 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:56.638197 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:56.652922 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:56.720544 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:57.135676 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:57.135998 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:57.153314 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:57.217350 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:57.636027 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:57.636150 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:57.654346 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:57.719682 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:58.134179 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:58.134488 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:58.153218 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:58.218931 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:58.633676 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:58.646837 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:58.659524 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:58.846774 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:59.133773 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:59.138462 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:59.154817 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:59.217360 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:12:59.635172 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:12:59.635452 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:12:59.653150 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:12:59.719394 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:00.140043 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:00.140077 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:13:00.155240 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:00.217238 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:00.632844 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:13:00.634028 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:00.652554 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:00.719012 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:01.135956 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:01.136170 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:13:01.153866 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:01.218388 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:01.633988 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:01.635569 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:13:01.653593 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:01.717088 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:02.133891 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:13:02.134883 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:02.152416 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:02.216310 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:02.633309 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:13:02.633358 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:02.653614 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:02.907546 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:03.138083 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:13:03.138389 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:03.155252 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:03.219188 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:03.636724 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:13:03.636927 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:03.653863 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:03.718355 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:04.133704 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:13:04.133893 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:04.156017 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:04.219716 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:04.634411 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:04.634568 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:13:04.651913 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:04.717220 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:05.133743 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:13:05.133930 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:05.151990 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:05.218427 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:05.635037 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:13:05.635881 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:05.651616 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:05.716683 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:06.139749 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:13:06.139902 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:06.154731 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:06.217561 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:06.636020 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:13:06.636930 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:06.653041 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:06.718913 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:07.135136 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:13:07.135956 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:07.153942 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:07.217659 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:07.633809 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:07.634040 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:13:07.652145 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:07.717939 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:08.133246 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1213 09:13:08.133443 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:08.154595 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:08.216656 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:08.642469 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:08.643951 392700 kapi.go:107] duration metric: took 30.514802697s to wait for kubernetes.io/minikube-addons=registry ...
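(Each kapi.go:96 line above is one iteration of a ~500ms poll — the cadence is visible in the timestamps: list pods matching a label selector, log the phase while it is still Pending, and stop once a pod is Running, at which point kapi.go:107 records the total wait as a "duration metric", as the registry line here just did after 30.5s. A compact client-go sketch of that loop follows; minikube's real implementation differs in detail, and the helper name is ours.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls pods matching selector until one is Running,
// then reports the total wait, mimicking the kapi.go:96/kapi.go:107
// pattern in the log. Illustrative sketch, not minikube's code.
func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Printf("duration metric: took %s to wait for %s ...\n",
						time.Since(start), selector)
					return nil
				}
				fmt.Printf("waiting for pod %q, current state: %s\n",
					selector, p.Status.Phase)
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	return fmt.Errorf("timed out waiting for %q after %s", selector, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabel(cs, "kube-system",
		"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
}
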
I1213 09:13:08.655024 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:08.717766 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:09.136867 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:09.152424 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:09.217371 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:09.635119 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:09.654132 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:09.722734 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:10.192962 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:10.194531 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:10.218185 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:10.636725 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:10.736670 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:10.736900 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:11.135314 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:11.153237 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:11.217281 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:11.636555 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:11.654253 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:11.726653 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:12.135121 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:12.153899 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:12.220374 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:12.634901 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:12.653249 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:12.736313 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:13.138621 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:13.154892 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:13.218461 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:13.633945 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:13.653863 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:13.717531 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:14.134430 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:14.153370 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:14.220987 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:14.635625 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:14.653675 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:14.717065 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:15.290476 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:15.294725 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:15.295288 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:15.633097 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:15.652692 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:15.719082 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:16.133941 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:16.151867 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:16.217258 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:16.633243 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:16.652470 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:16.716521 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:17.133005 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:17.153891 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:17.219219 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:17.634474 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:17.653145 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:17.719636 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:18.135146 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:18.154858 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:18.219408 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:18.634755 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:18.654147 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:18.719755 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:19.135426 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:19.153964 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:19.235466 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:19.634565 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:19.653337 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:19.717349 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:20.134431 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:20.152713 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:20.216907 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:20.632379 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:20.651987 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:20.717161 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:21.135375 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:21.151976 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:21.219921 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:21.638965 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:21.655875 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:21.720726 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:22.134920 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:22.155629 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:22.221571 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:22.635492 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:22.656050 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:22.717301 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:23.133519 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:23.152620 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:23.220694 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:23.634660 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:23.651788 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:23.717709 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:24.134370 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:24.154872 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:24.217411 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:24.637897 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:24.655242 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:24.737385 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:25.139561 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:25.152697 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:25.218462 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:25.633396 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:25.652864 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:25.719692 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:26.134257 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:26.154878 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:26.219128 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:26.635889 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:26.652911 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:26.718787 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:27.134254 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:27.152526 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:27.219045 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:27.824440 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:27.824673 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:27.824764 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:28.134337 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:28.152509 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:28.217241 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:28.634935 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:28.652593 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:28.717918 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:29.133064 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:29.153752 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:29.219470 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:29.635380 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:29.654431 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:29.735214 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:30.138785 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:30.157605 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:30.217959 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:30.635195 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:30.654836 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:30.717703 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:31.136745 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:31.236089 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:31.236139 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:31.636436 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:31.653803 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:31.717224 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:32.134632 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:32.235101 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:32.235344 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:32.645522 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:32.652719 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:32.720679 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:33.137718 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:33.156668 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:33.218572 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:33.634174 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:33.653084 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:33.718037 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:34.133279 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:34.154266 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:34.217997 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:34.638043 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:34.652893 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:34.738037 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:35.137396 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:35.156924 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:35.221444 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:35.633900 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:35.653879 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:35.718527 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:36.135422 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:36.154283 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:36.235962 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:36.636593 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:36.652522 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:36.717972 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:37.133524 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:37.155198 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:37.218089 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:37.632895 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:37.652282 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1213 09:13:37.717892 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:38.137885 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:38.152009 392700 kapi.go:107] duration metric: took 58.503521481s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1213 09:13:38.217812 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:38.632433 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:38.716310 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:39.133741 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:39.217319 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:39.634385 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:39.717042 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:40.134071 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:40.218670 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:40.633928 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:40.717426 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:41.134250 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:41.218023 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:41.633705 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:41.716431 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:42.133780 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:42.220119 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:42.633483 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:42.717173 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:43.134213 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:43.233819 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:43.637203 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:43.720025 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:44.136090 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:44.221521 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:44.639036 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:44.719948 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:45.135039 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:45.218899 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:45.633976 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:45.721350 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:46.134138 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:46.217998 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:46.637669 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:46.717179 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:47.136544 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:47.219570 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:47.633528 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:47.718883 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:48.161462 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:48.221832 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:48.634511 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:48.721194 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:49.133791 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:49.218406 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:49.789041 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:49.792013 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:50.133134 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:50.234743 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:50.633314 392700 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1213 09:13:50.716316 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:51.134218 392700 kapi.go:107] duration metric: took 1m13.005080428s to wait for app.kubernetes.io/name=ingress-nginx ...
I1213 09:13:51.216872 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:51.716458 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:52.220557 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:52.717232 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:53.218795 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:53.719651 392700 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1213 09:13:54.217821 392700 kapi.go:107] duration metric: took 1m12.504699283s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1213 09:13:54.219489 392700 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-246361 cluster.
I1213 09:13:54.220708 392700 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1213 09:13:54.221841 392700 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
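As a sketch of the opt-out named in the message above (the pod name and image are illustrative, and it is assumed, as the message suggests, that only the presence of the gcp-auth-skip-secret label key matters):

  apiVersion: v1
  kind: Pod
  metadata:
    name: no-gcp-auth                # hypothetical name, for illustration only
    labels:
      gcp-auth-skip-secret: "true"   # key presence opts this pod out of credential mounting
  spec:
    containers:
    - name: app
      image: public.ecr.aws/nginx/nginx:latest

Pods that existed before the addon was enabled can be refreshed with the flag the message names:

  minikube -p addons-246361 addons enable gcp-auth --refresh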
I1213 09:13:54.223192 392700 out.go:179] * Enabled addons: cloud-spanner, storage-provisioner, amd-gpu-device-plugin, storage-provisioner-rancher, inspektor-gadget, nvidia-device-plugin, ingress-dns, default-storageclass, registry-creds, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I1213 09:13:54.224371 392700 addons.go:530] duration metric: took 1m24.68552319s for enable addons: enabled=[cloud-spanner storage-provisioner amd-gpu-device-plugin storage-provisioner-rancher inspektor-gadget nvidia-device-plugin ingress-dns default-storageclass registry-creds metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I1213 09:13:54.224419 392700 start.go:247] waiting for cluster config update ...
I1213 09:13:54.224443 392700 start.go:256] writing updated cluster config ...
I1213 09:13:54.224792 392700 ssh_runner.go:195] Run: rm -f paused
I1213 09:13:54.231309 392700 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1213 09:13:54.235252 392700 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x9vlt" in "kube-system" namespace to be "Ready" or be gone ...
I1213 09:13:54.240283 392700 pod_ready.go:94] pod "coredns-66bc5c9577-x9vlt" is "Ready"
I1213 09:13:54.240335 392700 pod_ready.go:86] duration metric: took 5.040196ms for pod "coredns-66bc5c9577-x9vlt" in "kube-system" namespace to be "Ready" or be gone ...
I1213 09:13:54.243055 392700 pod_ready.go:83] waiting for pod "etcd-addons-246361" in "kube-system" namespace to be "Ready" or be gone ...
I1213 09:13:54.248721 392700 pod_ready.go:94] pod "etcd-addons-246361" is "Ready"
I1213 09:13:54.248748 392700 pod_ready.go:86] duration metric: took 5.663324ms for pod "etcd-addons-246361" in "kube-system" namespace to be "Ready" or be gone ...
I1213 09:13:54.251231 392700 pod_ready.go:83] waiting for pod "kube-apiserver-addons-246361" in "kube-system" namespace to be "Ready" or be gone ...
I1213 09:13:54.256167 392700 pod_ready.go:94] pod "kube-apiserver-addons-246361" is "Ready"
I1213 09:13:54.256189 392700 pod_ready.go:86] duration metric: took 4.938995ms for pod "kube-apiserver-addons-246361" in "kube-system" namespace to be "Ready" or be gone ...
I1213 09:13:54.258262 392700 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-246361" in "kube-system" namespace to be "Ready" or be gone ...
I1213 09:13:54.636246 392700 pod_ready.go:94] pod "kube-controller-manager-addons-246361" is "Ready"
I1213 09:13:54.636274 392700 pod_ready.go:86] duration metric: took 377.99103ms for pod "kube-controller-manager-addons-246361" in "kube-system" namespace to be "Ready" or be gone ...
I1213 09:13:54.836613 392700 pod_ready.go:83] waiting for pod "kube-proxy-f6vpr" in "kube-system" namespace to be "Ready" or be gone ...
I1213 09:13:55.236131 392700 pod_ready.go:94] pod "kube-proxy-f6vpr" is "Ready"
I1213 09:13:55.236163 392700 pod_ready.go:86] duration metric: took 399.509399ms for pod "kube-proxy-f6vpr" in "kube-system" namespace to be "Ready" or be gone ...
I1213 09:13:55.436277 392700 pod_ready.go:83] waiting for pod "kube-scheduler-addons-246361" in "kube-system" namespace to be "Ready" or be gone ...
I1213 09:13:55.835225 392700 pod_ready.go:94] pod "kube-scheduler-addons-246361" is "Ready"
I1213 09:13:55.835252 392700 pod_ready.go:86] duration metric: took 398.944175ms for pod "kube-scheduler-addons-246361" in "kube-system" namespace to be "Ready" or be gone ...
I1213 09:13:55.835265 392700 pod_ready.go:40] duration metric: took 1.603895142s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
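The extra wait above is a per-label readiness poll over the listed selectors; roughly the same check can be run by hand, one selector at a time (the timeout value here is illustrative):

  kubectl --context addons-246361 -n kube-system wait pod --selector=k8s-app=kube-dns --for=condition=ready --timeout=4m0s

and likewise for component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy, and component=kube-scheduler.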
I1213 09:13:55.881801 392700 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
I1213 09:13:55.883802 392700 out.go:179] * Done! kubectl is now configured to use "addons-246361" cluster and "default" namespace by default
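The skew figure above comes from comparing the kubectl client's minor version against the cluster's server version; the same pair can be read back directly (output shape varies by kubectl release):

  kubectl --context addons-246361 version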
==> CRI-O <==
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.177800435Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d18cb8bc-5a90-4410-8af5-65fbf9828e5c name=/runtime.v1.RuntimeService/ListContainers
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.178378299Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:628516a7d73533daff5dd847807b063734b490b3be17f322069b5862cab3bbda,PodSandboxId:7d574a2f34f68882bdbd41bbed987b533c20016e5acd576de42b45b2f324fd59,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765617259603055897,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b69c078-1088-484d-990b-d8794ed9b2c6,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37ee820ac55b9c1336b4d106799b475cfaa12f0a5d71aa35438310e3ce95399,PodSandboxId:dced63ac053305c7768f9cd746ec2d926ee40d250a6a06227d94c76fd66672f3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765617240445757059,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: decad740-d6c4-4453-a6a3-0a9ac1f58430,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3c9248017da5b177fe1f69d3216863ca995f332ee106f91d1d36bccc73dfe7,PodSandboxId:41bffb9ec70750921a60e8e0f102b77b9dfdb3057eecc6c8a33d6cc78e2021d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765617229970386771,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-w2qnr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8b2f9c0b-2e02-4126-9d1d-c1f045ca6f6b,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:628eb745c903d7a888d0fb1d5f9b057d7d5ada312e38a16b6699fd6395681a02,PodSandboxId:2d56fcdd6385824c310e7be7c766a9e95ca4da7b4a8575f4a4d455d21f2e803f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765617199761909099,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rtxd5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 153bc1ac-d8e9-4540-b55a-2728ae1974e8,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c912d2dff111e0461477800bb1445315bf3d43661c24a9aa6e2279fc3617b0,PodSandboxId:4505d18199645c770afd301a1ee3881a4007ba99590175cae7dd91ea1410870f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765617199201265517,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6zvn2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f555fef8-9057-4114-af37-9d7365c0bef2,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b4f2a90f2e3491953330a91cd5ade57ff093679a59bf9569f93a7b6ef247b0,PodSandboxId:4f8767a0b981d20b0bc7e5d2b9a4b04b6bb23a1a44e35e6ac916938b5cb1d481,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765617180774283051,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d548ec6-ac97-4b00-a992-cf50e0728d3c,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89eb96caabf1c75e5905d56d63737b533b45d5de24141cdf765492315cbd1765,PodSandboxId:8b2438312f009f66d4a55e56fba7f01549c3ac03ca6c6148d7686479be1bfe4c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765617159210431241,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pcr8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae35898b-cac4-4c5d-b1f5-3de19fba17ef,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4877853c63147ba15265057e2c32a56362db34c6d6bbc67ceff075e7fe08686,PodSandboxId:4dad2a7dc053f19c5407c74f603ff91fc75fcbc6f12138ce3f39a1b46abafd09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765617158864388340,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b05e28d-a4a6-4e90-af0c-bf01fd93b1e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2012f15f4ade88a5c865283e3de2526fc6c1a98918db531fe20e87e5809f3b2,PodSandboxId:f9a5e2370f1b141a45a09fdaca5db063a4936d5ce229ece30d51420e77101827,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765617150825692423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-x9vlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3722310-4cbe-4697-8045-c8353e07f242,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8082d73b8ece1f67e20c64e2cfab51d335ac46b0b40b55603142da740c91a3,PodSandboxId:ee4e7efa604ef12b36cdd19d812f24148ab40013f8080d40bda6b4383db8b3de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765617150031472376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f6vpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b60db149-95ea-4d92-88d4-958521a5cf75,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c0f467af6def5dd49ebbfbba9a5ba99410764f3415aaf4f3adf2ba77c16191d,PodSandboxId:f74fa673bfef882302672a71d399a5465966cf243a48410013087564e837a849,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765617137916504372,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397c8ea73c97f3674fbdc97e9d7e7383,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538894d57d3ca06d99af82f5f05877513892aa26744c920bec59842908f9af2c,PodSandboxId:47e4cfa9e38fae41f767e03875b44103ab0dd7ac0db7ecf0421933ae7d0242f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765617137886876529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 946988fd2b590065078c2500551ccf5e,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd984a20ab1f80a715a1ade17f930872c415db4424e9b3a206a11cddff88ed81,PodSandboxId:6af2fba32bd98f8091114ab3194cb5e1527b2788f377063378a7ab77dbe8f666,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765617137903829248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb770669e3c9ac4d04b00d62d163fe1c,},Annotations:
map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae132e84c3ae2c02d1dfcf431c4e4d10f6186e4af908262d22d2517a2e18c6b8,PodSandboxId:69ad068c41f7050551ff1f728dffef80fcf60dd8834187e5432815f09eeb554f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765617137874957679,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246361,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 04942281d89a1eb5c45cc1e401d754fc,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d18cb8bc-5a90-4410-8af5-65fbf9828e5c name=/runtime.v1.RuntimeService/ListContainers
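The ListContainers payloads above are the raw CRI view of the node's containers; the same listing can be pulled interactively with crictl, which the minikube node image typically ships (sudo is usually required inside the VM):

  minikube -p addons-246361 ssh "sudo crictl ps -a"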
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.219394155Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db34723e-5e5b-4842-97b9-18b42576ef9b name=/runtime.v1.RuntimeService/Version
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.219486745Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db34723e-5e5b-4842-97b9-18b42576ef9b name=/runtime.v1.RuntimeService/Version
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.221167257Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0bcd6100-0793-4c80-afa2-fae84284a4ea name=/runtime.v1.ImageService/ImageFsInfo
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.223364608Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765617404223333899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0bcd6100-0793-4c80-afa2-fae84284a4ea name=/runtime.v1.ImageService/ImageFsInfo
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.224300248Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=388720cb-c834-48e3-8144-e0da1d6e59b4 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.224368126Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=388720cb-c834-48e3-8144-e0da1d6e59b4 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.224889244Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:628516a7d73533daff5dd847807b063734b490b3be17f322069b5862cab3bbda,PodSandboxId:7d574a2f34f68882bdbd41bbed987b533c20016e5acd576de42b45b2f324fd59,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765617259603055897,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b69c078-1088-484d-990b-d8794ed9b2c6,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37ee820ac55b9c1336b4d106799b475cfaa12f0a5d71aa35438310e3ce95399,PodSandboxId:dced63ac053305c7768f9cd746ec2d926ee40d250a6a06227d94c76fd66672f3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765617240445757059,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: decad740-d6c4-4453-a6a3-0a9ac1f58430,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3c9248017da5b177fe1f69d3216863ca995f332ee106f91d1d36bccc73dfe7,PodSandboxId:41bffb9ec70750921a60e8e0f102b77b9dfdb3057eecc6c8a33d6cc78e2021d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765617229970386771,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-w2qnr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8b2f9c0b-2e02-4126-9d1d-c1f045ca6f6b,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:628eb745c903d7a888d0fb1d5f9b057d7d5ada312e38a16b6699fd6395681a02,PodSandboxId:2d56fcdd6385824c310e7be7c766a9e95ca4da7b4a8575f4a4d455d21f2e803f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765617199761909099,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rtxd5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 153bc1ac-d8e9-4540-b55a-2728ae1974e8,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c912d2dff111e0461477800bb1445315bf3d43661c24a9aa6e2279fc3617b0,PodSandboxId:4505d18199645c770afd301a1ee3881a4007ba99590175cae7dd91ea1410870f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765617199201265517,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6zvn2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f555fef8-9057-4114-af37-9d7365c0bef2,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b4f2a90f2e3491953330a91cd5ade57ff093679a59bf9569f93a7b6ef247b0,PodSandboxId:4f8767a0b981d20b0bc7e5d2b9a4b04b6bb23a1a44e35e6ac916938b5cb1d481,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765617180774283051,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d548ec6-ac97-4b00-a992-cf50e0728d3c,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89eb96caabf1c75e5905d56d63737b533b45d5de24141cdf765492315cbd1765,PodSandboxId:8b2438312f009f66d4a55e56fba7f01549c3ac03ca6c6148d7686479be1bfe4c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765617159210431241,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pcr8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae35898b-cac4-4c5d-b1f5-3de19fba17ef,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4877853c63147ba15265057e2c32a56362db34c6d6bbc67ceff075e7fe08686,PodSandboxId:4dad2a7dc053f19c5407c74f603ff91fc75fcbc6f12138ce3f39a1b46abafd09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765617158864388340,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b05e28d-a4a6-4e90-af0c-bf01fd93b1e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2012f15f4ade88a5c865283e3de2526fc6c1a98918db531fe20e87e5809f3b2,PodSandboxId:f9a5e2370f1b141a45a09fdaca5db063a4936d5ce229ece30d51420e77101827,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765617150825692423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-x9vlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3722310-4cbe-4697-8045-c8353e07f242,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8082d73b8ece1f67e20c64e2cfab51d335ac46b0b40b55603142da740c91a3,PodSandboxId:ee4e7efa604ef12b36cdd19d812f24148ab40013f8080d40bda6b4383db8b3de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765617150031472376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f6vpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b60db149-95ea-4d92-88d4-958521a5cf75,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c0f467af6def5dd49ebbfbba9a5ba99410764f3415aaf4f3adf2ba77c16191d,PodSandboxId:f74fa673bfef882302672a71d399a5465966cf243a48410013087564e837a849,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765617137916504372,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397c8ea73c97f3674fbdc97e9d7e7383,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538894d57d3ca06d99af82f5f05877513892aa26744c920bec59842908f9af2c,PodSandboxId:47e4cfa9e38fae41f767e03875b44103ab0dd7ac0db7ecf0421933ae7d0242f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765617137886876529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 946988fd2b590065078c2500551ccf5e,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd984a20ab1f80a715a1ade17f930872c415db4424e9b3a206a11cddff88ed81,PodSandboxId:6af2fba32bd98f8091114ab3194cb5e1527b2788f377063378a7ab77dbe8f666,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765617137903829248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb770669e3c9ac4d04b00d62d163fe1c,},Annotations:
map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae132e84c3ae2c02d1dfcf431c4e4d10f6186e4af908262d22d2517a2e18c6b8,PodSandboxId:69ad068c41f7050551ff1f728dffef80fcf60dd8834187e5432815f09eeb554f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765617137874957679,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246361,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 04942281d89a1eb5c45cc1e401d754fc,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=388720cb-c834-48e3-8144-e0da1d6e59b4 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.256502648Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6eab6c19-d33d-43e2-ac53-a421217a1007 name=/runtime.v1.RuntimeService/Version
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.256789277Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6eab6c19-d33d-43e2-ac53-a421217a1007 name=/runtime.v1.RuntimeService/Version
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.258249521Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=672023a3-fbe2-4793-b965-2c789c115278 name=/runtime.v1.ImageService/ImageFsInfo
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.259583657Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765617404259557391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=672023a3-fbe2-4793-b965-2c789c115278 name=/runtime.v1.ImageService/ImageFsInfo
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.260528832Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0e260f3-671f-430f-bc10-72d1155b5fb1 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.260601361Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0e260f3-671f-430f-bc10-72d1155b5fb1 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.260933642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:628516a7d73533daff5dd847807b063734b490b3be17f322069b5862cab3bbda,PodSandboxId:7d574a2f34f68882bdbd41bbed987b533c20016e5acd576de42b45b2f324fd59,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765617259603055897,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b69c078-1088-484d-990b-d8794ed9b2c6,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37ee820ac55b9c1336b4d106799b475cfaa12f0a5d71aa35438310e3ce95399,PodSandboxId:dced63ac053305c7768f9cd746ec2d926ee40d250a6a06227d94c76fd66672f3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765617240445757059,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: decad740-d6c4-4453-a6a3-0a9ac1f58430,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3c9248017da5b177fe1f69d3216863ca995f332ee106f91d1d36bccc73dfe7,PodSandboxId:41bffb9ec70750921a60e8e0f102b77b9dfdb3057eecc6c8a33d6cc78e2021d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765617229970386771,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-w2qnr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8b2f9c0b-2e02-4126-9d1d-c1f045ca6f6b,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:628eb745c903d7a888d0fb1d5f9b057d7d5ada312e38a16b6699fd6395681a02,PodSandboxId:2d56fcdd6385824c310e7be7c766a9e95ca4da7b4a8575f4a4d455d21f2e803f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765617199761909099,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rtxd5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 153bc1ac-d8e9-4540-b55a-2728ae1974e8,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c912d2dff111e0461477800bb1445315bf3d43661c24a9aa6e2279fc3617b0,PodSandboxId:4505d18199645c770afd301a1ee3881a4007ba99590175cae7dd91ea1410870f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765617199201265517,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6zvn2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f555fef8-9057-4114-af37-9d7365c0bef2,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b4f2a90f2e3491953330a91cd5ade57ff093679a59bf9569f93a7b6ef247b0,PodSandboxId:4f8767a0b981d20b0bc7e5d2b9a4b04b6bb23a1a44e35e6ac916938b5cb1d481,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765617180774283051,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d548ec6-ac97-4b00-a992-cf50e0728d3c,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89eb96caabf1c75e5905d56d63737b533b45d5de24141cdf765492315cbd1765,PodSandboxId:8b2438312f009f66d4a55e56fba7f01549c3ac03ca6c6148d7686479be1bfe4c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765617159210431241,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pcr8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae35898b-cac4-4c5d-b1f5-3de19fba17ef,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4877853c63147ba15265057e2c32a56362db34c6d6bbc67ceff075e7fe08686,PodSandboxId:4dad2a7dc053f19c5407c74f603ff91fc75fcbc6f12138ce3f39a1b46abafd09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765617158864388340,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b05e28d-a4a6-4e90-af0c-bf01fd93b1e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2012f15f4ade88a5c865283e3de2526fc6c1a98918db531fe20e87e5809f3b2,PodSandboxId:f9a5e2370f1b141a45a09fdaca5db063a4936d5ce229ece30d51420e77101827,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765617150825692423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-x9vlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3722310-4cbe-4697-8045-c8353e07f242,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8082d73b8ece1f67e20c64e2cfab51d335ac46b0b40b55603142da740c91a3,PodSandboxId:ee4e7efa604ef12b36cdd19d812f24148ab40013f8080d40bda6b4383db8b3de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765617150031472376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f6vpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b60db149-95ea-4d92-88d4-958521a5cf75,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c0f467af6def5dd49ebbfbba9a5ba99410764f3415aaf4f3adf2ba77c16191d,PodSandboxId:f74fa673bfef882302672a71d399a5465966cf243a48410013087564e837a849,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765617137916504372,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397c8ea73c97f3674fbdc97e9d7e7383,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538894d57d3ca06d99af82f5f05877513892aa26744c920bec59842908f9af2c,PodSandboxId:47e4cfa9e38fae41f767e03875b44103ab0dd7ac0db7ecf0421933ae7d0242f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765617137886876529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 946988fd2b590065078c2500551ccf5e,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd984a20ab1f80a715a1ade17f930872c415db4424e9b3a206a11cddff88ed81,PodSandboxId:6af2fba32bd98f8091114ab3194cb5e1527b2788f377063378a7ab77dbe8f666,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765617137903829248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb770669e3c9ac4d04b00d62d163fe1c,},Annotations:
map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae132e84c3ae2c02d1dfcf431c4e4d10f6186e4af908262d22d2517a2e18c6b8,PodSandboxId:69ad068c41f7050551ff1f728dffef80fcf60dd8834187e5432815f09eeb554f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765617137874957679,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246361,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 04942281d89a1eb5c45cc1e401d754fc,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0e260f3-671f-430f-bc10-72d1155b5fb1 name=/runtime.v1.RuntimeService/ListContainers
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.295266087Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.295562282Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.296638307Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a8f9797-b27a-46ff-b2f8-fd837004fddf name=/runtime.v1.RuntimeService/Version
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.296822479Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a8f9797-b27a-46ff-b2f8-fd837004fddf name=/runtime.v1.RuntimeService/Version
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.298282951Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2de3a8f-da6c-43eb-9150-85dd8ad12e8e name=/runtime.v1.ImageService/ImageFsInfo
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.300158211Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765617404300117951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2de3a8f-da6c-43eb-9150-85dd8ad12e8e name=/runtime.v1.ImageService/ImageFsInfo
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.301460934Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9367a39c-4fa1-4b13-af73-160c6fb198df name=/runtime.v1.RuntimeService/ListContainers
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.301559407Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9367a39c-4fa1-4b13-af73-160c6fb198df name=/runtime.v1.RuntimeService/ListContainers
Dec 13 09:16:44 addons-246361 crio[815]: time="2025-12-13 09:16:44.302061491Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:628516a7d73533daff5dd847807b063734b490b3be17f322069b5862cab3bbda,PodSandboxId:7d574a2f34f68882bdbd41bbed987b533c20016e5acd576de42b45b2f324fd59,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765617259603055897,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b69c078-1088-484d-990b-d8794ed9b2c6,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37ee820ac55b9c1336b4d106799b475cfaa12f0a5d71aa35438310e3ce95399,PodSandboxId:dced63ac053305c7768f9cd746ec2d926ee40d250a6a06227d94c76fd66672f3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765617240445757059,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: decad740-d6c4-4453-a6a3-0a9ac1f58430,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e3c9248017da5b177fe1f69d3216863ca995f332ee106f91d1d36bccc73dfe7,PodSandboxId:41bffb9ec70750921a60e8e0f102b77b9dfdb3057eecc6c8a33d6cc78e2021d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765617229970386771,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-w2qnr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8b2f9c0b-2e02-4126-9d1d-c1f045ca6f6b,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:628eb745c903d7a888d0fb1d5f9b057d7d5ada312e38a16b6699fd6395681a02,PodSandboxId:2d56fcdd6385824c310e7be7c766a9e95ca4da7b4a8575f4a4d455d21f2e803f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765617199761909099,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rtxd5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 153bc1ac-d8e9-4540-b55a-2728ae1974e8,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c912d2dff111e0461477800bb1445315bf3d43661c24a9aa6e2279fc3617b0,PodSandboxId:4505d18199645c770afd301a1ee3881a4007ba99590175cae7dd91ea1410870f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765617199201265517,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6zvn2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f555fef8-9057-4114-af37-9d7365c0bef2,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b4f2a90f2e3491953330a91cd5ade57ff093679a59bf9569f93a7b6ef247b0,PodSandboxId:4f8767a0b981d20b0bc7e5d2b9a4b04b6bb23a1a44e35e6ac916938b5cb1d481,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765617180774283051,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d548ec6-ac97-4b00-a992-cf50e0728d3c,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89eb96caabf1c75e5905d56d63737b533b45d5de24141cdf765492315cbd1765,PodSandboxId:8b2438312f009f66d4a55e56fba7f01549c3ac03ca6c6148d7686479be1bfe4c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38
35498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765617159210431241,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pcr8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae35898b-cac4-4c5d-b1f5-3de19fba17ef,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4877853c63147ba15265057e2c32a56362db34c6d6bbc67ceff075e7fe08686,PodSandboxId:4dad2a7dc053f19c5407c74f603ff91fc75fcbc6f12138ce3f39a1b46abafd09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765617158864388340,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b05e28d-a4a6-4e90-af0c-bf01fd93b1e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2012f15f4ade88a5c865283e3de2526fc6c1a98918db531fe20e87e5809f3b2,PodSandboxId:f9a5e2370f1b141a45a09fdaca5db063a4936d5ce229ece30d51420e77101827,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a
9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765617150825692423,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-x9vlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3722310-4cbe-4697-8045-c8353e07f242,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a8082d73b8ece1f67e20c64e2cfab51d335ac46b0b40b55603142da740c91a3,PodSandboxId:ee4e7efa604ef12b36cdd19d812f24148ab40013f8080d40bda6b4383db8b3de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765617150031472376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f6vpr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b60db149-95ea-4d92-88d4-958521a5cf75,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c0f467af6def5dd49ebbfbba9a5ba99410764f3415aaf4f3adf2ba77c16191d,PodSandboxId:f74fa673bfef882302672a71d399a5465966cf243a48410013087564e837a849,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765617137916504372,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397c8ea73c97f3674fbdc97e9d7e7383,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538894d57d3ca06d99af82f5f05877513892aa26744c920bec59842908f9af2c,PodSandboxId:47e4cfa9e38fae41f767e03875b44103ab0dd7ac0db7ecf0421933ae7d0242f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765617137886876529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 946988fd2b590065078c2500551ccf5e,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd984a20ab1f80a715a1ade17f930872c415db4424e9b3a206a11cddff88ed81,PodSandboxId:6af2fba32bd98f8091114ab3194cb5e1527b2788f377063378a7ab77dbe8f666,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765617137903829248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb770669e3c9ac4d04b00d62d163fe1c,},Annotations:
map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae132e84c3ae2c02d1dfcf431c4e4d10f6186e4af908262d22d2517a2e18c6b8,PodSandboxId:69ad068c41f7050551ff1f728dffef80fcf60dd8834187e5432815f09eeb554f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765617137874957679,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246361,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 04942281d89a1eb5c45cc1e401d754fc,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9367a39c-4fa1-4b13-af73-160c6fb198df name=/runtime.v1.RuntimeService/ListContainers
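Annotation: the two large ListContainers dumps above are routine CRI polling responses, not errors. Every container the test depends on reports CONTAINER_RUNNING, and the only CONTAINER_EXITED entries are the ingress-nginx admission create/patch jobs, which are expected to run to completion. A minimal way to reproduce this view inside the guest, assuming the minikube binary is on PATH and crictl is available in the VM (both assumptions, not shown in the log):

    # list all CRI-O containers, including exited ones, inside the minikube VM
    minikube -p addons-246361 ssh -- sudo crictl ps -a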
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
628516a7d7353 public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff 2 minutes ago Running nginx 0 7d574a2f34f68 nginx default
e37ee820ac55b gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 2 minutes ago Running busybox 0 dced63ac05330 busybox default
5e3c9248017da registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad 2 minutes ago Running controller 0 41bffb9ec7075 ingress-nginx-controller-85d4c799dd-w2qnr ingress-nginx
628eb745c903d a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e 3 minutes ago Exited patch 1 2d56fcdd63858 ingress-nginx-admission-patch-rtxd5 ingress-nginx
73c912d2dff11 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285 3 minutes ago Exited create 0 4505d18199645 ingress-nginx-admission-create-6zvn2 ingress-nginx
86b4f2a90f2e3 docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 3 minutes ago Running minikube-ingress-dns 0 4f8767a0b981d kube-ingress-dns-minikube kube-system
89eb96caabf1c docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 8b2438312f009 amd-gpu-device-plugin-pcr8k kube-system
a4877853c6314 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 4dad2a7dc053f storage-provisioner kube-system
d2012f15f4ade 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 f9a5e2370f1b1 coredns-66bc5c9577-x9vlt kube-system
7a8082d73b8ec 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45 4 minutes ago Running kube-proxy 0 ee4e7efa604ef kube-proxy-f6vpr kube-system
1c0f467af6def 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952 4 minutes ago Running kube-scheduler 0 f74fa673bfef8 kube-scheduler-addons-246361 kube-system
fd984a20ab1f8 a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85 4 minutes ago Running kube-apiserver 0 6af2fba32bd98 kube-apiserver-addons-246361 kube-system
538894d57d3ca 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8 4 minutes ago Running kube-controller-manager 0 47e4cfa9e38fa kube-controller-manager-addons-246361 kube-system
ae132e84c3ae2 a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1 4 minutes ago Running etcd 0 69ad068c41f70 etcd-addons-246361 kube-system
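Annotation: the controller row above is the pod the failed in-VM curl depends on; its annotations in the ListContainers dump show hostPort 80 and 443 bindings, so HTTP to 127.0.0.1 inside the VM should terminate at ingress-nginx. A quick, hedged check that something is actually listening on those ports (same minikube-on-PATH assumption as above):

    # confirm listeners on the ingress hostPorts inside the VM
    minikube -p addons-246361 ssh -- sudo ss -ltnp | grep -E ':(80|443) '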
==> coredns [d2012f15f4ade88a5c865283e3de2526fc6c1a98918db531fe20e87e5809f3b2] <==
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
[INFO] Reloading complete
[INFO] 127.0.0.1:39775 - 62444 "HINFO IN 6417589294913946888.7430898304822193385. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.095304032s
[INFO] 10.244.0.23:39738 - 8152 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000372247s
[INFO] 10.244.0.23:41562 - 30552 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002697213s
[INFO] 10.244.0.23:45553 - 57206 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000202551s
[INFO] 10.244.0.23:37181 - 44543 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000262199s
[INFO] 10.244.0.23:43344 - 20817 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138134s
[INFO] 10.244.0.23:41976 - 9043 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000174261s
[INFO] 10.244.0.23:58992 - 61906 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001148837s
[INFO] 10.244.0.23:47672 - 7753 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.003439575s
[INFO] 10.244.0.27:59950 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000556418s
[INFO] 10.244.0.27:49848 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00015588s
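Annotation: the CoreDNS log looks healthy. The initial i/o timeouts against 10.96.0.1:443 happened while the cluster network was still coming up (kube-proxy's caches only sync at 09:12:31 per its section below), and every query after the reload is answered NOERROR or with an expected NXDOMAIN from search-path expansion. A simple in-cluster DNS spot-check; the pod name and busybox tag below are illustrative choices, not part of the test:

    kubectl --context addons-246361 run dns-check --rm -it --restart=Never \
      --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local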
==> describe nodes <==
Name:               addons-246361
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=addons-246361
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
                    minikube.k8s.io/name=addons-246361
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2025_12_13T09_12_24_0700
                    minikube.k8s.io/version=v1.37.0
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
                    topology.hostpath.csi/node=addons-246361
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 13 Dec 2025 09:12:21 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  addons-246361
  AcquireTime:     <unset>
  RenewTime:       Sat, 13 Dec 2025 09:16:39 +0000
Conditions:
  Type            Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----            ------  -----------------                 ------------------                ------                      -------
  MemoryPressure  False   Sat, 13 Dec 2025 09:14:57 +0000   Sat, 13 Dec 2025 09:12:18 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Sat, 13 Dec 2025 09:14:57 +0000   Sat, 13 Dec 2025 09:12:18 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure     False   Sat, 13 Dec 2025 09:14:57 +0000   Sat, 13 Dec 2025 09:12:18 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready           True    Sat, 13 Dec 2025 09:14:57 +0000   Sat, 13 Dec 2025 09:12:25 +0000   KubeletReady                kubelet is posting ready status
Addresses:
  InternalIP:  192.168.39.185
  Hostname:    addons-246361
Capacity:
  cpu:                2
  ephemeral-storage:  17734596Ki
  hugepages-2Mi:      0
  memory:             4001788Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  17734596Ki
  hugepages-2Mi:      0
  memory:             4001788Ki
  pods:               110
System Info:
  Machine ID:                 27894c69ae154bb1a7622eea43d7ca9d
  System UUID:                27894c69-ae15-4bb1-a762-2eea43d7ca9d
  Boot ID:                    7ded0609-6263-48ca-9a1f-2025ab0ab76a
  Kernel Version:             6.6.95
  OS Image:                   Buildroot 2025.02
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  cri-o://1.29.1
  Kubelet Version:            v1.34.2
  Kube-Proxy Version:
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (13 in total)
  Namespace      Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------      ----                                         ------------  ----------  ---------------  -------------  ---
  default        busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
  default        hello-world-app-5d498dc89-9kxwk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
  default        nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
  ingress-nginx  ingress-nginx-controller-85d4c799dd-w2qnr    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m7s
  kube-system    amd-gpu-device-plugin-pcr8k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
  kube-system    coredns-66bc5c9577-x9vlt                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m15s
  kube-system    etcd-addons-246361                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m20s
  kube-system    kube-apiserver-addons-246361                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m20s
  kube-system    kube-controller-manager-addons-246361        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m20s
  kube-system    kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
  kube-system    kube-proxy-f6vpr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
  kube-system    kube-scheduler-addons-246361                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m20s
  kube-system    storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (42%)  0 (0%)
  memory             260Mi (6%)  170Mi (4%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                    From             Message
  ----    ------                   ----                   ----             -------
  Normal  Starting                 4m13s                  kube-proxy
  Normal  Starting                 4m27s                  kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  4m27s (x8 over 4m27s)  kubelet          Node addons-246361 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m27s (x8 over 4m27s)  kubelet          Node addons-246361 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m27s (x7 over 4m27s)  kubelet          Node addons-246361 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  4m27s                  kubelet          Updated Node Allocatable limit across pods
  Normal  Starting                 4m20s                  kubelet          Starting kubelet.
  Normal  NodeAllocatableEnforced  4m20s                  kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  4m20s                  kubelet          Node addons-246361 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m20s                  kubelet          Node addons-246361 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m20s                  kubelet          Node addons-246361 status is now: NodeHasSufficientPID
  Normal  NodeReady                4m19s                  kubelet          Node addons-246361 status is now: NodeReady
  Normal  RegisteredNode           4m16s                  node-controller  Node addons-246361 event: Registered Node addons-246361 in Controller
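Annotation: nothing in the node description points at the failure. The node is Ready and untainted, and only 850m of 2 CPUs and 260Mi of ~4Gi memory are requested. If resource pressure were suspected, the summary can be re-derived directly; the grep window below is an illustrative choice, not part of the test harness:

    kubectl --context addons-246361 describe node addons-246361 | grep -A 8 'Allocated resources'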
==> dmesg <==
[ +0.816818] kauditd_printk_skb: 387 callbacks suppressed
[ +5.459264] kauditd_printk_skb: 7 callbacks suppressed
[Dec13 09:13] kauditd_printk_skb: 11 callbacks suppressed
[ +7.259680] kauditd_printk_skb: 26 callbacks suppressed
[ +7.332277] kauditd_printk_skb: 38 callbacks suppressed
[ +5.373178] kauditd_printk_skb: 146 callbacks suppressed
[ +4.239429] kauditd_printk_skb: 52 callbacks suppressed
[ +6.020570] kauditd_printk_skb: 95 callbacks suppressed
[ +4.789663] kauditd_printk_skb: 96 callbacks suppressed
[ +0.000924] kauditd_printk_skb: 44 callbacks suppressed
[ +5.349381] kauditd_printk_skb: 53 callbacks suppressed
[Dec13 09:14] kauditd_printk_skb: 47 callbacks suppressed
[ +9.383074] kauditd_printk_skb: 17 callbacks suppressed
[ +5.590573] kauditd_printk_skb: 22 callbacks suppressed
[ +5.813482] kauditd_printk_skb: 95 callbacks suppressed
[ +0.000041] kauditd_printk_skb: 49 callbacks suppressed
[ +0.445654] kauditd_printk_skb: 117 callbacks suppressed
[ +4.456361] kauditd_printk_skb: 98 callbacks suppressed
[ +1.809988] kauditd_printk_skb: 113 callbacks suppressed
[ +0.000821] kauditd_printk_skb: 106 callbacks suppressed
[Dec13 09:15] kauditd_printk_skb: 41 callbacks suppressed
[ +0.000047] kauditd_printk_skb: 10 callbacks suppressed
[ +5.273519] kauditd_printk_skb: 41 callbacks suppressed
[ +0.479298] kauditd_printk_skb: 130 callbacks suppressed
[Dec13 09:16] kauditd_printk_skb: 7 callbacks suppressed
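Annotation: the dmesg excerpt is dominated by kauditd rate-limiting notices, which on a busy minikube VM usually just reflect bursts of audit events during pod churn rather than a kernel fault. To check whether anything else surfaced, a simple sketch (again assuming the minikube binary is on PATH):

    minikube -p addons-246361 ssh -- sudo dmesg | tail -n 50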
==> etcd [ae132e84c3ae2c02d1dfcf431c4e4d10f6186e4af908262d22d2517a2e18c6b8] <==
{"level":"warn","ts":"2025-12-13T09:13:27.811921Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"313.795313ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-13T09:13:27.812077Z","caller":"traceutil/trace.go:172","msg":"trace[516222104] range","detail":"{range_begin:/registry/secrets; range_end:; response_count:0; response_revision:1077; }","duration":"313.959656ms","start":"2025-12-13T09:13:27.498111Z","end":"2025-12-13T09:13:27.812070Z","steps":["trace[516222104] 'agreement among raft nodes before linearized reading' (duration: 313.560605ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-13T09:13:27.812155Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T09:13:27.498096Z","time spent":"314.051565ms","remote":"127.0.0.1:51888","response type":"/etcdserverpb.KV/Range","request count":0,"request size":21,"response count":0,"response size":29,"request content":"key:\"/registry/secrets\" limit:1 "}
{"level":"warn","ts":"2025-12-13T09:13:27.812405Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T09:13:27.445045Z","time spent":"366.695857ms","remote":"127.0.0.1:51994","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1066 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
{"level":"warn","ts":"2025-12-13T09:13:27.812582Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"228.909455ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-13T09:13:27.812617Z","caller":"traceutil/trace.go:172","msg":"trace[1456460595] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:1077; }","duration":"228.945387ms","start":"2025-12-13T09:13:27.583666Z","end":"2025-12-13T09:13:27.812611Z","steps":["trace[1456460595] 'agreement among raft nodes before linearized reading' (duration: 228.891456ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-13T09:13:27.812645Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.428717ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2025-12-13T09:13:27.812750Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.396624ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-13T09:13:27.812762Z","caller":"traceutil/trace.go:172","msg":"trace[833198630] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1077; }","duration":"101.547932ms","start":"2025-12-13T09:13:27.711208Z","end":"2025-12-13T09:13:27.812756Z","steps":["trace[833198630] 'agreement among raft nodes before linearized reading' (duration: 101.412128ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T09:13:27.812787Z","caller":"traceutil/trace.go:172","msg":"trace[1769544653] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1077; }","duration":"151.412387ms","start":"2025-12-13T09:13:27.661349Z","end":"2025-12-13T09:13:27.812761Z","steps":["trace[1769544653] 'agreement among raft nodes before linearized reading' (duration: 151.385002ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-13T09:13:27.812874Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.865002ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-13T09:13:27.812906Z","caller":"traceutil/trace.go:172","msg":"trace[142762208] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1077; }","duration":"165.896582ms","start":"2025-12-13T09:13:27.647004Z","end":"2025-12-13T09:13:27.812901Z","steps":["trace[142762208] 'agreement among raft nodes before linearized reading' (duration: 165.856364ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-13T09:13:27.812957Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.329829ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-13T09:13:27.812983Z","caller":"traceutil/trace.go:172","msg":"trace[2056939821] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1077; }","duration":"185.35643ms","start":"2025-12-13T09:13:27.627623Z","end":"2025-12-13T09:13:27.812979Z","steps":["trace[2056939821] 'agreement among raft nodes before linearized reading' (duration: 185.319823ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T09:13:31.951857Z","caller":"traceutil/trace.go:172","msg":"trace[1920715489] transaction","detail":"{read_only:false; response_revision:1106; number_of_response:1; }","duration":"114.940122ms","start":"2025-12-13T09:13:31.836904Z","end":"2025-12-13T09:13:31.951844Z","steps":["trace[1920715489] 'process raft request' (duration: 114.834497ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T09:13:49.782448Z","caller":"traceutil/trace.go:172","msg":"trace[865306237] linearizableReadLoop","detail":"{readStateIndex:1213; appliedIndex:1213; }","duration":"154.451185ms","start":"2025-12-13T09:13:49.627964Z","end":"2025-12-13T09:13:49.782415Z","steps":["trace[865306237] 'read index received' (duration: 154.446026ms)","trace[865306237] 'applied index is now lower than readState.Index' (duration: 4.566µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-13T09:13:49.782666Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.6615ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-13T09:13:49.782685Z","caller":"traceutil/trace.go:172","msg":"trace[949296125] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1179; }","duration":"154.718895ms","start":"2025-12-13T09:13:49.627961Z","end":"2025-12-13T09:13:49.782680Z","steps":["trace[949296125] 'agreement among raft nodes before linearized reading' (duration: 154.633501ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T09:13:49.785905Z","caller":"traceutil/trace.go:172","msg":"trace[571349889] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"162.545309ms","start":"2025-12-13T09:13:49.623347Z","end":"2025-12-13T09:13:49.785892Z","steps":["trace[571349889] 'process raft request' (duration: 159.543384ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T09:14:25.180461Z","caller":"traceutil/trace.go:172","msg":"trace[749875535] linearizableReadLoop","detail":"{readStateIndex:1453; appliedIndex:1453; }","duration":"283.59722ms","start":"2025-12-13T09:14:24.896844Z","end":"2025-12-13T09:14:25.180441Z","steps":["trace[749875535] 'read index received' (duration: 283.544319ms)","trace[749875535] 'applied index is now lower than readState.Index' (duration: 6.215µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-13T09:14:25.180637Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"283.771537ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-13T09:14:25.180661Z","caller":"traceutil/trace.go:172","msg":"trace[525254140] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1412; }","duration":"283.8178ms","start":"2025-12-13T09:14:24.896838Z","end":"2025-12-13T09:14:25.180656Z","steps":["trace[525254140] 'agreement among raft nodes before linearized reading' (duration: 283.748292ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-13T09:14:25.181960Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.420483ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/registry-6b586f9694-4vn9j.1880bb7299047c65\" limit:1 ","response":"range_response_count:1 size:826"}
{"level":"info","ts":"2025-12-13T09:14:25.182012Z","caller":"traceutil/trace.go:172","msg":"trace[1755167697] range","detail":"{range_begin:/registry/events/kube-system/registry-6b586f9694-4vn9j.1880bb7299047c65; range_end:; response_count:1; response_revision:1413; }","duration":"131.48138ms","start":"2025-12-13T09:14:25.050522Z","end":"2025-12-13T09:14:25.182003Z","steps":["trace[1755167697] 'agreement among raft nodes before linearized reading' (duration: 131.349296ms)"],"step_count":1}
{"level":"info","ts":"2025-12-13T09:14:25.182634Z","caller":"traceutil/trace.go:172","msg":"trace[662986795] transaction","detail":"{read_only:false; response_revision:1413; number_of_response:1; }","duration":"290.398267ms","start":"2025-12-13T09:14:24.892222Z","end":"2025-12-13T09:14:25.182620Z","steps":["trace[662986795] 'process raft request' (duration: 289.543981ms)"],"step_count":1}
==> kernel <==
09:16:44 up 4 min, 0 users, load average: 0.27, 0.85, 0.45
Linux addons-246361 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 11 23:11:39 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [fd984a20ab1f80a715a1ade17f930872c415db4424e9b3a206a11cddff88ed81] <==
E1213 09:13:11.748227 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.156.47:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.156.47:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.156.47:443: connect: connection refused" logger="UnhandledError"
E1213 09:13:11.752619 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.156.47:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.156.47:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.156.47:443: connect: connection refused" logger="UnhandledError"
I1213 09:13:11.859777 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1213 09:14:06.679631 1 conn.go:339] Error on socket receive: read tcp 192.168.39.185:8443->192.168.39.1:46614: use of closed network connection
E1213 09:14:06.868602 1 conn.go:339] Error on socket receive: read tcp 192.168.39.185:8443->192.168.39.1:46650: use of closed network connection
I1213 09:14:15.470086 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1213 09:14:15.684514 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.52.103"}
I1213 09:14:16.245950 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.226.2"}
E1213 09:15:01.166250 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I1213 09:15:02.806230 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1213 09:15:12.773571 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1213 09:15:18.697363 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1213 09:15:18.697526 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1213 09:15:18.729081 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1213 09:15:18.729638 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1213 09:15:18.738225 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1213 09:15:18.738323 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1213 09:15:18.771754 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1213 09:15:18.771832 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1213 09:15:18.840855 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1213 09:15:18.840900 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1213 09:15:19.738278 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1213 09:15:19.841231 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
W1213 09:15:19.914316 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
I1213 09:16:43.149037 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.251.12"}
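Annotation: the apiserver section records three things, none of them the ingress failure itself: early v1beta1.metrics.k8s.io availability errors while metrics-server was still starting, the snapshot.storage.k8s.io group versions being removed (with their watchers terminated at 09:15:19) when that addon was torn down, and the hello-world-app ClusterIP allocation matching the 1s-old pod in the node description. To inspect the aggregated API's final state, assuming the metrics-server addon was left enabled:

    kubectl --context addons-246361 get apiservice v1beta1.metrics.k8s.io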
==> kube-controller-manager [538894d57d3ca06d99af82f5f05877513892aa26744c920bec59842908f9af2c] <==
I1213 09:15:28.376196 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1213 09:15:28.594099 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 09:15:28.595248 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 09:15:28.870451 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 09:15:28.871610 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 09:15:29.932786 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 09:15:29.933935 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 09:15:35.222638 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 09:15:35.223651 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 09:15:36.653149 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 09:15:36.654449 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 09:15:40.917795 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 09:15:40.918826 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 09:15:56.597011 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 09:15:56.598093 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 09:15:59.780816 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 09:15:59.784555 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 09:16:00.325325 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 09:16:00.326347 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 09:16:25.415503 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 09:16:25.416992 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 09:16:43.012743 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 09:16:43.014833 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1213 09:16:43.164164 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1213 09:16:43.168465 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
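Annotation: every controller-manager error here is the same PartialObjectMetadata list/watch failure from the metadata informer. That is the expected echo of the CRD removal seen in the apiserver section (the snapshot.storage.k8s.io watchers terminated at 09:15:19), not an independent fault. One way to confirm those CRDs are really gone:

    kubectl --context addons-246361 get crd | grep snapshot.storage.k8s.io || echo 'no snapshot CRDs left'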
==> kube-proxy [7a8082d73b8ece1f67e20c64e2cfab51d335ac46b0b40b55603142da740c91a3] <==
I1213 09:12:30.979101 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1213 09:12:31.186975 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1213 09:12:31.193535 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.185"]
E1213 09:12:31.195598 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1213 09:12:31.525594 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1213 09:12:31.525644 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1213 09:12:31.525668 1 server_linux.go:132] "Using iptables Proxier"
I1213 09:12:31.543959 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1213 09:12:31.545104 1 server.go:527] "Version info" version="v1.34.2"
I1213 09:12:31.545145 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1213 09:12:31.556630 1 config.go:200] "Starting service config controller"
I1213 09:12:31.557244 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1213 09:12:31.557274 1 config.go:106] "Starting endpoint slice config controller"
I1213 09:12:31.557286 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1213 09:12:31.557296 1 config.go:403] "Starting serviceCIDR config controller"
I1213 09:12:31.557300 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1213 09:12:31.559188 1 config.go:309] "Starting node config controller"
I1213 09:12:31.559239 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1213 09:12:31.559257 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1213 09:12:31.658111 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1213 09:12:31.658792 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1213 09:12:31.659837 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
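Annotation: kube-proxy came up cleanly in IPv4 iptables mode; the ip6tables complaint only records that the VM kernel lacks an IPv6 nat table, and the nodePortAddresses message is advisory. If service routing needed verifying, the chain dump below is a sketch (KUBE-SERVICES is the standard entry chain of the iptables proxier):

    minikube -p addons-246361 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20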
==> kube-scheduler [1c0f467af6def5dd49ebbfbba9a5ba99410764f3415aaf4f3adf2ba77c16191d] <==
E1213 09:12:21.220530 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1213 09:12:21.220578 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1213 09:12:21.220790 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1213 09:12:21.221017 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1213 09:12:21.221101 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1213 09:12:21.221144 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1213 09:12:21.224675 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1213 09:12:21.224896 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1213 09:12:21.224947 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1213 09:12:21.224983 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1213 09:12:22.082860 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1213 09:12:22.097331 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1213 09:12:22.110250 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1213 09:12:22.117022 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1213 09:12:22.163270 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1213 09:12:22.183246 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1213 09:12:22.190479 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1213 09:12:22.224060 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1213 09:12:22.239668 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1213 09:12:22.284480 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1213 09:12:22.344466 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1213 09:12:22.370879 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1213 09:12:22.387559 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1213 09:12:22.753373 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
I1213 09:12:25.211650 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
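[note] The burst of "Failed to watch ... is forbidden" errors at 09:12:21-09:12:22 is the usual control-plane bootstrap race: the scheduler begins listing resources before kubeadm has finished reconciling the system:kube-scheduler RBAC bindings, and the "Caches are synced" line at 09:12:25 shows it recovered on its own. Had the errors persisted, the permissions could be spot-checked directly (illustrative command):

    kubectl --context addons-246361 auth can-i list nodes --as=system:kube-scheduler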
==> kubelet <==
Dec 13 09:15:22 addons-246361 kubelet[1502]: I1213 09:15:22.049199 1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bebdd078-f41c-4293-a21f-61f2269782c8" path="/var/lib/kubelet/pods/bebdd078-f41c-4293-a21f-61f2269782c8/volumes"
Dec 13 09:15:24 addons-246361 kubelet[1502]: E1213 09:15:24.374010 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765617324372260704 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 09:15:24 addons-246361 kubelet[1502]: E1213 09:15:24.374131 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765617324372260704 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 09:15:26 addons-246361 kubelet[1502]: I1213 09:15:26.056631 1502 scope.go:117] "RemoveContainer" containerID="bb4911f9d799ec7c39a154e01ac52d6cf318e1e6525d17f41396588b795c3a4b"
Dec 13 09:15:26 addons-246361 kubelet[1502]: I1213 09:15:26.173386 1502 scope.go:117] "RemoveContainer" containerID="66972a6c4b0b32e33dbc6586aca0c68e24bd547ff529df7918f95aa788de470f"
Dec 13 09:15:26 addons-246361 kubelet[1502]: I1213 09:15:26.289891 1502 scope.go:117] "RemoveContainer" containerID="a9c32a8bf13a339f14d8693b4abf0e4c242bc5a950af143044c5a37fa739ae66"
Dec 13 09:15:34 addons-246361 kubelet[1502]: E1213 09:15:34.377002 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765617334376436885 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 09:15:34 addons-246361 kubelet[1502]: E1213 09:15:34.377025 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765617334376436885 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
[... the same eviction_manager "missing image stats" error pair repeated at 09:15:44, 09:15:54, and 09:16:04 ...]
Dec 13 09:16:13 addons-246361 kubelet[1502]: I1213 09:16:13.042657 1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Dec 13 09:16:14 addons-246361 kubelet[1502]: E1213 09:16:14.393070 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765617374392568786 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 09:16:14 addons-246361 kubelet[1502]: E1213 09:16:14.393096 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765617374392568786 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
[... the same error pair repeated at 09:16:24 and 09:16:34 ...]
Dec 13 09:16:42 addons-246361 kubelet[1502]: I1213 09:16:42.044485 1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-pcr8k" secret="" err="secret \"gcp-auth\" not found"
Dec 13 09:16:43 addons-246361 kubelet[1502]: I1213 09:16:43.118236 1502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwbs5\" (UniqueName: \"kubernetes.io/projected/3f706e5d-516a-4d79-b9f6-5f8085a46b78-kube-api-access-dwbs5\") pod \"hello-world-app-5d498dc89-9kxwk\" (UID: \"3f706e5d-516a-4d79-b9f6-5f8085a46b78\") " pod="default/hello-world-app-5d498dc89-9kxwk"
Dec 13 09:16:44 addons-246361 kubelet[1502]: E1213 09:16:44.403848 1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765617404403431524 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
Dec 13 09:16:44 addons-246361 kubelet[1502]: E1213 09:16:44.403870 1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765617404403431524 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
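[note] Two recurring kubelet messages here are noise rather than causes: the secret "gcp-auth" not found warnings are expected whenever the gcp-auth addon is not enabled (the image pull proceeds without the secret), and the eviction_manager "missing image stats" errors look like a kubelet/CRI stats mismatch for this runtime's image filesystem (/var/lib/containers/storage), with no eviction actually attempted. Whether gcp-auth is enabled can be confirmed with (illustrative):

    out/minikube-linux-amd64 -p addons-246361 addons list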
==> storage-provisioner [a4877853c63147ba15265057e2c32a56362db34c6d6bbc67ceff075e7fe08686] <==
W1213 09:16:19.198798 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
[... the same v1 Endpoints deprecation warning repeated in pairs roughly every 2s from 09:16:21 through 09:16:43 ...]
W1213 09:16:43.325569 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
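[note] These warnings are emitted by the API server each time the provisioner touches its v1 Endpoints leader-election lock, which accounts for the steady pairs every ~2s; they are deprecation noise, not errors. The lock object can be inspected with the command below (the lock name is what minikube's hostpath provisioner typically creates and is an assumption here):

    kubectl --context addons-246361 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml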
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-246361 -n addons-246361
helpers_test.go:270: (dbg) Run: kubectl --context addons-246361 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-9kxwk ingress-nginx-admission-create-6zvn2 ingress-nginx-admission-patch-rtxd5
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run: kubectl --context addons-246361 describe pod hello-world-app-5d498dc89-9kxwk ingress-nginx-admission-create-6zvn2 ingress-nginx-admission-patch-rtxd5
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-246361 describe pod hello-world-app-5d498dc89-9kxwk ingress-nginx-admission-create-6zvn2 ingress-nginx-admission-patch-rtxd5: exit status 1 (77.991878ms)
-- stdout --
Name: hello-world-app-5d498dc89-9kxwk
Namespace: default
Priority: 0
Service Account: default
Node: addons-246361/192.168.39.185
Start Time: Sat, 13 Dec 2025 09:16:43 +0000
Labels: app=hello-world-app
pod-template-hash=5d498dc89
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hello-world-app-5d498dc89
Containers:
hello-world-app:
Container ID:
Image: docker.io/kicbase/echo-server:1.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dwbs5 (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-dwbs5:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2s default-scheduler Successfully assigned default/hello-world-app-5d498dc89-9kxwk to addons-246361
Normal Pulling 2s kubelet Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-6zvn2" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-rtxd5" not found
** /stderr **
helpers_test.go:288: kubectl --context addons-246361 describe pod hello-world-app-5d498dc89-9kxwk ingress-nginx-admission-create-6zvn2 ingress-nginx-admission-patch-rtxd5: exit status 1
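[note] The two NotFound errors are expected: the ingress-nginx admission create/patch pods belong to one-shot Jobs, so they matched the status.phase!=Running selector above as Completed pods and were then most likely garbage-collected (the upstream Jobs set ttlSecondsAfterFinished) before the describe ran. Any remaining Job state could be checked with (illustrative):

    kubectl --context addons-246361 -n ingress-nginx get jobs,pods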
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-246361 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-246361 addons disable ingress-dns --alsologtostderr -v=1: (1.073879141s)
addons_test.go:1055: (dbg) Run: out/minikube-linux-amd64 -p addons-246361 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-246361 addons disable ingress --alsologtostderr -v=1: (7.795214503s)
--- FAIL: TestAddons/parallel/Ingress (159.09s)
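[note] For local triage of this failure, the ingress data path and controller state can be probed by hand outside the test harness (hypothetical commands, not taken from the log; the added --max-time keeps iteration fast):

    out/minikube-linux-amd64 -p addons-246361 ssh "curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"
    kubectl --context addons-246361 -n ingress-nginx get pods,svc -o wide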