=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run: kubectl --context addons-774690 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run: kubectl --context addons-774690 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run: kubectl --context addons-774690 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [ea010f3e-0b70-4331-8ef2-e8dbeb8da0dd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [ea010f3e-0b70-4331-8ef2-e8dbeb8da0dd] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004860254s
I1206 09:15:24.805011 396534 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run: out/minikube-linux-amd64 -p addons-774690 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-774690 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.069173475s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run: kubectl --context addons-774690 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run: out/minikube-linux-amd64 -p addons-774690 ip
addons_test.go:299: (dbg) Run: nslookup hello-john.test 192.168.39.249
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-774690 -n addons-774690
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p addons-774690 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-774690 logs -n 25: (1.142814232s)
helpers_test.go:260: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-548578 │ download-only-548578 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
│ start │ --download-only -p binary-mirror-961783 --alsologtostderr --binary-mirror http://127.0.0.1:35409 --driver=kvm2 --container-runtime=crio │ binary-mirror-961783 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ │
│ delete │ -p binary-mirror-961783 │ binary-mirror-961783 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
│ addons │ disable dashboard -p addons-774690 │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ │
│ addons │ enable dashboard -p addons-774690 │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ │
│ start │ -p addons-774690 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:14 UTC │
│ addons │ addons-774690 addons disable volcano --alsologtostderr -v=1 │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │ 06 Dec 25 09:14 UTC │
│ addons │ addons-774690 addons disable gcp-auth --alsologtostderr -v=1 │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │ 06 Dec 25 09:14 UTC │
│ addons │ enable headlamp -p addons-774690 --alsologtostderr -v=1 │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │ 06 Dec 25 09:14 UTC │
│ addons │ addons-774690 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │ 06 Dec 25 09:14 UTC │
│ addons │ addons-774690 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
│ ssh │ addons-774690 ssh cat /opt/local-path-provisioner/pvc-6faf3b95-bd02-4761-afb7-95d974158c7c_default_test-pvc/file1 │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
│ addons │ addons-774690 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
│ addons │ addons-774690 addons disable headlamp --alsologtostderr -v=1 │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
│ ip │ addons-774690 ip │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
│ addons │ addons-774690 addons disable registry --alsologtostderr -v=1 │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
│ addons │ addons-774690 addons disable yakd --alsologtostderr -v=1 │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
│ addons │ addons-774690 addons disable metrics-server --alsologtostderr -v=1 │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
│ addons │ addons-774690 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
│ ssh │ addons-774690 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-774690 │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
│ addons │ addons-774690 addons disable registry-creds --alsologtostderr -v=1 │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
│ addons │ addons-774690 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
│ addons │ addons-774690 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:16 UTC │
│ ip │ addons-774690 ip │ addons-774690 │ jenkins │ v1.37.0 │ 06 Dec 25 09:17 UTC │ 06 Dec 25 09:17 UTC │
└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/06 09:12:21
Running on machine: ubuntu-20-agent-8
Binary: Built with gc go1.25.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1206 09:12:21.264725 397455 out.go:360] Setting OutFile to fd 1 ...
I1206 09:12:21.265041 397455 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:12:21.265053 397455 out.go:374] Setting ErrFile to fd 2...
I1206 09:12:21.265059 397455 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:12:21.265288 397455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
I1206 09:12:21.265896 397455 out.go:368] Setting JSON to false
I1206 09:12:21.266842 397455 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3281,"bootTime":1765009060,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1206 09:12:21.266908 397455 start.go:143] virtualization: kvm guest
I1206 09:12:21.269023 397455 out.go:179] * [addons-774690] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1206 09:12:21.270564 397455 out.go:179] - MINIKUBE_LOCATION=22047
I1206 09:12:21.270608 397455 notify.go:221] Checking for updates...
I1206 09:12:21.272959 397455 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1206 09:12:21.274303 397455 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
I1206 09:12:21.275586 397455 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
I1206 09:12:21.277028 397455 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1206 09:12:21.278359 397455 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1206 09:12:21.279684 397455 driver.go:422] Setting default libvirt URI to qemu:///system
I1206 09:12:21.310872 397455 out.go:179] * Using the kvm2 driver based on user configuration
I1206 09:12:21.312242 397455 start.go:309] selected driver: kvm2
I1206 09:12:21.312259 397455 start.go:927] validating driver "kvm2" against <nil>
I1206 09:12:21.312274 397455 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1206 09:12:21.313315 397455 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1206 09:12:21.313622 397455 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1206 09:12:21.313656 397455 cni.go:84] Creating CNI manager for ""
I1206 09:12:21.313700 397455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1206 09:12:21.313723 397455 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1206 09:12:21.313784 397455 start.go:353] cluster config:
{Name:addons-774690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-774690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1206 09:12:21.313931 397455 iso.go:125] acquiring lock: {Name:mkf36bf2c9901302dc74c7ac02d02007e6a978f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1206 09:12:21.315576 397455 out.go:179] * Starting "addons-774690" primary control-plane node in "addons-774690" cluster
I1206 09:12:21.316897 397455 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1206 09:12:21.316929 397455 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
I1206 09:12:21.316951 397455 cache.go:65] Caching tarball of preloaded images
I1206 09:12:21.317038 397455 preload.go:238] Found /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1206 09:12:21.317049 397455 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
I1206 09:12:21.317363 397455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/config.json ...
I1206 09:12:21.317385 397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/config.json: {Name:mk4ced784f71219404f915ebf50e084aa875dc8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 09:12:21.317539 397455 start.go:360] acquireMachinesLock for addons-774690: {Name:mk0e8456872a81874c47f1b4b5997728e70c766d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1206 09:12:21.317585 397455 start.go:364] duration metric: took 31.534µs to acquireMachinesLock for "addons-774690"
I1206 09:12:21.317602 397455 start.go:93] Provisioning new machine with config: &{Name:addons-774690 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-774690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1206 09:12:21.317657 397455 start.go:125] createHost starting for "" (driver="kvm2")
I1206 09:12:21.319345 397455 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1206 09:12:21.319530 397455 start.go:159] libmachine.API.Create for "addons-774690" (driver="kvm2")
I1206 09:12:21.319566 397455 client.go:173] LocalClient.Create starting
I1206 09:12:21.319684 397455 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem
I1206 09:12:21.386408 397455 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem
I1206 09:12:21.505973 397455 main.go:143] libmachine: creating domain...
I1206 09:12:21.505995 397455 main.go:143] libmachine: creating network...
I1206 09:12:21.507603 397455 main.go:143] libmachine: found existing default network
I1206 09:12:21.507893 397455 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1206 09:12:21.508529 397455 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d26980}
I1206 09:12:21.508651 397455 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-774690</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1206 09:12:21.515041 397455 main.go:143] libmachine: creating private network mk-addons-774690 192.168.39.0/24...
I1206 09:12:21.584983 397455 main.go:143] libmachine: private network mk-addons-774690 192.168.39.0/24 created
I1206 09:12:21.585281 397455 main.go:143] libmachine: <network>
<name>mk-addons-774690</name>
<uuid>0f5e1b32-f92f-4225-b6b5-e6d16a15f14d</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:8f:87:80'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1206 09:12:21.585328 397455 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690 ...
I1206 09:12:21.585351 397455 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22047-392561/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso
I1206 09:12:21.585361 397455 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22047-392561/.minikube
I1206 09:12:21.585432 397455 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22047-392561/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22047-392561/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso...
I1206 09:12:21.864874 397455 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa...
I1206 09:12:21.887234 397455 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/addons-774690.rawdisk...
I1206 09:12:21.887282 397455 main.go:143] libmachine: Writing magic tar header
I1206 09:12:21.887324 397455 main.go:143] libmachine: Writing SSH key tar header
I1206 09:12:21.887408 397455 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690 ...
I1206 09:12:21.887470 397455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690
I1206 09:12:21.887497 397455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690 (perms=drwx------)
I1206 09:12:21.887516 397455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-392561/.minikube/machines
I1206 09:12:21.887529 397455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-392561/.minikube/machines (perms=drwxr-xr-x)
I1206 09:12:21.887541 397455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-392561/.minikube
I1206 09:12:21.887549 397455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-392561/.minikube (perms=drwxr-xr-x)
I1206 09:12:21.887559 397455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-392561
I1206 09:12:21.887567 397455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-392561 (perms=drwxrwxr-x)
I1206 09:12:21.887577 397455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1206 09:12:21.887584 397455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1206 09:12:21.887595 397455 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1206 09:12:21.887602 397455 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1206 09:12:21.887612 397455 main.go:143] libmachine: checking permissions on dir: /home
I1206 09:12:21.887618 397455 main.go:143] libmachine: skipping /home - not owner
I1206 09:12:21.887624 397455 main.go:143] libmachine: defining domain...
I1206 09:12:21.888933 397455 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-774690</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/addons-774690.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-774690'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1206 09:12:21.897421 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:d6:5e:ab in network default
I1206 09:12:21.898138 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:21.898161 397455 main.go:143] libmachine: starting domain...
I1206 09:12:21.898168 397455 main.go:143] libmachine: ensuring networks are active...
I1206 09:12:21.899205 397455 main.go:143] libmachine: Ensuring network default is active
I1206 09:12:21.899683 397455 main.go:143] libmachine: Ensuring network mk-addons-774690 is active
I1206 09:12:21.900484 397455 main.go:143] libmachine: getting domain XML...
I1206 09:12:21.901800 397455 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-774690</name>
<uuid>6637641e-4385-4e2f-bcf4-adf9edc82956</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/addons-774690.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:02:15:5c'/>
<source network='mk-addons-774690'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:d6:5e:ab'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1206 09:12:23.218876 397455 main.go:143] libmachine: waiting for domain to start...
I1206 09:12:23.220380 397455 main.go:143] libmachine: domain is now running
I1206 09:12:23.220398 397455 main.go:143] libmachine: waiting for IP...
I1206 09:12:23.221161 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:23.221664 397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
I1206 09:12:23.221694 397455 main.go:143] libmachine: trying to list again with source=arp
I1206 09:12:23.221947 397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
I1206 09:12:23.221995 397455 retry.go:31] will retry after 295.292313ms: waiting for domain to come up
I1206 09:12:23.518620 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:23.519222 397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
I1206 09:12:23.519240 397455 main.go:143] libmachine: trying to list again with source=arp
I1206 09:12:23.519595 397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
I1206 09:12:23.519635 397455 retry.go:31] will retry after 377.089345ms: waiting for domain to come up
I1206 09:12:23.898090 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:23.898641 397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
I1206 09:12:23.898659 397455 main.go:143] libmachine: trying to list again with source=arp
I1206 09:12:23.898945 397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
I1206 09:12:23.899004 397455 retry.go:31] will retry after 397.605073ms: waiting for domain to come up
I1206 09:12:24.299024 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:24.299615 397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
I1206 09:12:24.299637 397455 main.go:143] libmachine: trying to list again with source=arp
I1206 09:12:24.299976 397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
I1206 09:12:24.300018 397455 retry.go:31] will retry after 489.121787ms: waiting for domain to come up
I1206 09:12:24.790564 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:24.791070 397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
I1206 09:12:24.791086 397455 main.go:143] libmachine: trying to list again with source=arp
I1206 09:12:24.791356 397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
I1206 09:12:24.791400 397455 retry.go:31] will retry after 547.775883ms: waiting for domain to come up
I1206 09:12:25.341430 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:25.342187 397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
I1206 09:12:25.342205 397455 main.go:143] libmachine: trying to list again with source=arp
I1206 09:12:25.342553 397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
I1206 09:12:25.342606 397455 retry.go:31] will retry after 575.42966ms: waiting for domain to come up
I1206 09:12:25.919580 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:25.920138 397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
I1206 09:12:25.920157 397455 main.go:143] libmachine: trying to list again with source=arp
I1206 09:12:25.920537 397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
I1206 09:12:25.920595 397455 retry.go:31] will retry after 942.250925ms: waiting for domain to come up
I1206 09:12:26.864846 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:26.865422 397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
I1206 09:12:26.865438 397455 main.go:143] libmachine: trying to list again with source=arp
I1206 09:12:26.865763 397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
I1206 09:12:26.865801 397455 retry.go:31] will retry after 1.477195332s: waiting for domain to come up
I1206 09:12:28.345783 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:28.346336 397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
I1206 09:12:28.346356 397455 main.go:143] libmachine: trying to list again with source=arp
I1206 09:12:28.346643 397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
I1206 09:12:28.346693 397455 retry.go:31] will retry after 1.655335883s: waiting for domain to come up
I1206 09:12:30.004609 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:30.005128 397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
I1206 09:12:30.005142 397455 main.go:143] libmachine: trying to list again with source=arp
I1206 09:12:30.005422 397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
I1206 09:12:30.005462 397455 retry.go:31] will retry after 1.662112692s: waiting for domain to come up
I1206 09:12:31.670161 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:31.670814 397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
I1206 09:12:31.670832 397455 main.go:143] libmachine: trying to list again with source=arp
I1206 09:12:31.671153 397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
I1206 09:12:31.671199 397455 retry.go:31] will retry after 2.355274201s: waiting for domain to come up
I1206 09:12:34.029809 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:34.030267 397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
I1206 09:12:34.030279 397455 main.go:143] libmachine: trying to list again with source=arp
I1206 09:12:34.030531 397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
I1206 09:12:34.030566 397455 retry.go:31] will retry after 2.915121356s: waiting for domain to come up
I1206 09:12:36.946965 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:36.947469 397455 main.go:143] libmachine: domain addons-774690 has current primary IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:36.947482 397455 main.go:143] libmachine: found domain IP: 192.168.39.249
I1206 09:12:36.947490 397455 main.go:143] libmachine: reserving static IP address...
I1206 09:12:36.947817 397455 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-774690", mac: "52:54:00:02:15:5c", ip: "192.168.39.249"} in network mk-addons-774690
I1206 09:12:37.143042 397455 main.go:143] libmachine: reserved static IP address 192.168.39.249 for domain addons-774690
I1206 09:12:37.143071 397455 main.go:143] libmachine: waiting for SSH...
I1206 09:12:37.143079 397455 main.go:143] libmachine: Getting to WaitForSSH function...
I1206 09:12:37.145931 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:37.146471 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:minikube Clientid:01:52:54:00:02:15:5c}
I1206 09:12:37.146514 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:37.146803 397455 main.go:143] libmachine: Using SSH client type: native
I1206 09:12:37.147068 397455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1206 09:12:37.147081 397455 main.go:143] libmachine: About to run SSH command:
exit 0
I1206 09:12:37.266580 397455 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1206 09:12:37.266986 397455 main.go:143] libmachine: domain creation complete
I1206 09:12:37.268496 397455 machine.go:94] provisionDockerMachine start ...
I1206 09:12:37.270840 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:37.271142 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:37.271165 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:37.271314 397455 main.go:143] libmachine: Using SSH client type: native
I1206 09:12:37.271534 397455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1206 09:12:37.271547 397455 main.go:143] libmachine: About to run SSH command:
hostname
I1206 09:12:37.385585 397455 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1206 09:12:37.385627 397455 buildroot.go:166] provisioning hostname "addons-774690"
I1206 09:12:37.388866 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:37.389328 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:37.389352 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:37.389736 397455 main.go:143] libmachine: Using SSH client type: native
I1206 09:12:37.389963 397455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1206 09:12:37.389977 397455 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-774690 && echo "addons-774690" | sudo tee /etc/hostname
I1206 09:12:37.522119 397455 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-774690
I1206 09:12:37.525221 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:37.525593 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:37.525622 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:37.525971 397455 main.go:143] libmachine: Using SSH client type: native
I1206 09:12:37.526200 397455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1206 09:12:37.526223 397455 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-774690' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-774690/g' /etc/hosts;
else
echo '127.0.1.1 addons-774690' | sudo tee -a /etc/hosts;
fi
fi
I1206 09:12:37.654727 397455 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1206 09:12:37.654771 397455 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22047-392561/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-392561/.minikube}
I1206 09:12:37.654795 397455 buildroot.go:174] setting up certificates
I1206 09:12:37.654816 397455 provision.go:84] configureAuth start
I1206 09:12:37.657627 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:37.658129 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:37.658152 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:37.660461 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:37.660865 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:37.660927 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:37.661069 397455 provision.go:143] copyHostCerts
I1206 09:12:37.661131 397455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem (1123 bytes)
I1206 09:12:37.661240 397455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem (1679 bytes)
I1206 09:12:37.661296 397455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem (1082 bytes)
I1206 09:12:37.661342 397455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem org=jenkins.addons-774690 san=[127.0.0.1 192.168.39.249 addons-774690 localhost minikube]
I1206 09:12:37.716816 397455 provision.go:177] copyRemoteCerts
I1206 09:12:37.716878 397455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1206 09:12:37.719884 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:37.720344 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:37.720372 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:37.720600 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:12:37.811076 397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1206 09:12:37.852903 397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1206 09:12:37.881451 397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1206 09:12:37.909622 397455 provision.go:87] duration metric: took 254.786505ms to configureAuth
I1206 09:12:37.909655 397455 buildroot.go:189] setting minikube options for container-runtime
I1206 09:12:37.909870 397455 config.go:182] Loaded profile config "addons-774690": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:12:37.913098 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:37.913433 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:37.913450 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:37.913601 397455 main.go:143] libmachine: Using SSH client type: native
I1206 09:12:37.913856 397455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1206 09:12:37.913875 397455 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1206 09:12:38.157342 397455 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1206 09:12:38.157399 397455 machine.go:97] duration metric: took 888.860096ms to provisionDockerMachine
I1206 09:12:38.157414 397455 client.go:176] duration metric: took 16.83784032s to LocalClient.Create
I1206 09:12:38.157441 397455 start.go:167] duration metric: took 16.837921755s to libmachine.API.Create "addons-774690"
I1206 09:12:38.157455 397455 start.go:293] postStartSetup for "addons-774690" (driver="kvm2")
I1206 09:12:38.157466 397455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1206 09:12:38.157549 397455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1206 09:12:38.160853 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:38.161278 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:38.161309 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:38.161525 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:12:38.250161 397455 ssh_runner.go:195] Run: cat /etc/os-release
I1206 09:12:38.254967 397455 info.go:137] Remote host: Buildroot 2025.02
I1206 09:12:38.255000 397455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-392561/.minikube/addons for local assets ...
I1206 09:12:38.255067 397455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-392561/.minikube/files for local assets ...
I1206 09:12:38.255090 397455 start.go:296] duration metric: took 97.627973ms for postStartSetup
I1206 09:12:38.258373 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:38.258978 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:38.259015 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:38.259296 397455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/config.json ...
I1206 09:12:38.259548 397455 start.go:128] duration metric: took 16.941878107s to createHost
I1206 09:12:38.261660 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:38.261985 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:38.262004 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:38.262151 397455 main.go:143] libmachine: Using SSH client type: native
I1206 09:12:38.262348 397455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.249 22 <nil> <nil>}
I1206 09:12:38.262357 397455 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1206 09:12:38.376010 397455 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765012358.336236686
I1206 09:12:38.376034 397455 fix.go:216] guest clock: 1765012358.336236686
I1206 09:12:38.376042 397455 fix.go:229] Guest: 2025-12-06 09:12:38.336236686 +0000 UTC Remote: 2025-12-06 09:12:38.259562404 +0000 UTC m=+17.047061298 (delta=76.674282ms)
I1206 09:12:38.376058 397455 fix.go:200] guest clock delta is within tolerance: 76.674282ms
I1206 09:12:38.376064 397455 start.go:83] releasing machines lock for "addons-774690", held for 17.058470853s
I1206 09:12:38.379190 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:38.379761 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:38.379789 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:38.380398 397455 ssh_runner.go:195] Run: cat /version.json
I1206 09:12:38.380556 397455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1206 09:12:38.383688 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:38.383940 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:38.384183 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:38.384214 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:38.384395 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:38.384410 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:12:38.384426 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:38.384664 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:12:38.467260 397455 ssh_runner.go:195] Run: systemctl --version
I1206 09:12:38.503268 397455 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1206 09:12:38.666096 397455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1206 09:12:38.673387 397455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1206 09:12:38.673458 397455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1206 09:12:38.693372 397455 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1206 09:12:38.693402 397455 start.go:496] detecting cgroup driver to use...
I1206 09:12:38.693465 397455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1206 09:12:38.714463 397455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1206 09:12:38.731825 397455 docker.go:218] disabling cri-docker service (if available) ...
I1206 09:12:38.731907 397455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1206 09:12:38.749950 397455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1206 09:12:38.766932 397455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1206 09:12:38.913090 397455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1206 09:12:39.116258 397455 docker.go:234] disabling docker service ...
I1206 09:12:39.116351 397455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1206 09:12:39.132879 397455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1206 09:12:39.148524 397455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1206 09:12:39.302698 397455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1206 09:12:39.445663 397455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1206 09:12:39.461177 397455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1206 09:12:39.482688 397455 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1206 09:12:39.482790 397455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1206 09:12:39.494228 397455 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1206 09:12:39.494290 397455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1206 09:12:39.506627 397455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1206 09:12:39.518604 397455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1206 09:12:39.531385 397455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1206 09:12:39.544254 397455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1206 09:12:39.556809 397455 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1206 09:12:39.576866 397455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1206 09:12:39.589032 397455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1206 09:12:39.599434 397455 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1206 09:12:39.599507 397455 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1206 09:12:39.619804 397455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1206 09:12:39.631689 397455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1206 09:12:39.768694 397455 ssh_runner.go:195] Run: sudo systemctl restart crio
I1206 09:12:39.882663 397455 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1206 09:12:39.882794 397455 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1206 09:12:39.888045 397455 start.go:564] Will wait 60s for crictl version
I1206 09:12:39.888125 397455 ssh_runner.go:195] Run: which crictl
I1206 09:12:39.891876 397455 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1206 09:12:39.926288 397455 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1206 09:12:39.926416 397455 ssh_runner.go:195] Run: crio --version
I1206 09:12:39.957157 397455 ssh_runner.go:195] Run: crio --version
I1206 09:12:39.990070 397455 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
I1206 09:12:39.994329 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:39.994843 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:39.994883 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:39.995205 397455 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1206 09:12:40.000071 397455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1206 09:12:40.015283 397455 kubeadm.go:884] updating cluster {Name:addons-774690 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-774690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1206 09:12:40.015406 397455 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1206 09:12:40.015449 397455 ssh_runner.go:195] Run: sudo crictl images --output json
I1206 09:12:40.044958 397455 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
I1206 09:12:40.045030 397455 ssh_runner.go:195] Run: which lz4
I1206 09:12:40.049475 397455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1206 09:12:40.054121 397455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1206 09:12:40.054153 397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
I1206 09:12:41.262302 397455 crio.go:462] duration metric: took 1.212862096s to copy over tarball
I1206 09:12:41.262407 397455 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1206 09:12:42.660878 397455 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.398437025s)
I1206 09:12:42.660909 397455 crio.go:469] duration metric: took 1.398565722s to extract the tarball
I1206 09:12:42.660917 397455 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1206 09:12:42.697982 397455 ssh_runner.go:195] Run: sudo crictl images --output json
I1206 09:12:42.738954 397455 crio.go:514] all images are preloaded for cri-o runtime.
I1206 09:12:42.738983 397455 cache_images.go:86] Images are preloaded, skipping loading
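The preload sequence above is check, copy, extract, verify, delete. The same flow can be replayed against a scratch tarball; gzip stands in for lz4 here in case the lz4 binary is absent (the real extraction is `tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4`, exactly as logged), and all paths below are throwaway ones, not the node's.

```shell
# Build a tiny stand-in for the 340 MB preload archive.
work=$(mktemp -d)
mkdir -p "$work/src" "$work/var"
echo image-layer > "$work/src/layer.txt"
tar -C "$work/src" -czf "$work/preloaded.tar.gz" layer.txt
# Existence check (the log's version: `stat -c "%s %y" /preloaded.tar.lz4`):
[ -s "$work/preloaded.tar.gz" ]
# Extract into the target tree, then remove the tarball as the log does:
tar -C "$work/var" -xzf "$work/preloaded.tar.gz"
rm "$work/preloaded.tar.gz"
cat "$work/var/layer.txt"   # -> image-layer
```

On the node, the `crictl images --output json` run before and after is what flips the decision from "assuming images are not preloaded" to "all images are preloaded".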
I1206 09:12:42.738993 397455 kubeadm.go:935] updating node { 192.168.39.249 8443 v1.34.2 crio true true} ...
I1206 09:12:42.739090 397455 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-774690 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
[Install]
config:
{KubernetesVersion:v1.34.2 ClusterName:addons-774690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
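The kubelet snippet above is a systemd drop-in (the 313-byte `10-kubeadm.conf` scp'd at 09:12:42); the empty `ExecStart=` line is what lets the drop-in replace, rather than append to, the base unit's command. A sketch against a temp directory instead of `/etc/systemd/system/kubelet.service.d/`:

```shell
# Write the same drop-in to a scratch dir; on the node it lands in
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
dir=$(mktemp -d)
cat > "$dir/10-kubeadm.conf" <<'EOF'
[Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-774690 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
[Install]
EOF
# Two ExecStart lines: the empty one clears the inherited command,
# the second supplies the replacement.
grep -c '^ExecStart' "$dir/10-kubeadm.conf"   # -> 2
```

This is why the log follows up with `systemctl daemon-reload` before `systemctl start kubelet`: systemd only rereads drop-ins on reload.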
I1206 09:12:42.739165 397455 ssh_runner.go:195] Run: crio config
I1206 09:12:42.786928 397455 cni.go:84] Creating CNI manager for ""
I1206 09:12:42.786956 397455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1206 09:12:42.786981 397455 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1206 09:12:42.787012 397455 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-774690 NodeName:addons-774690 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1206 09:12:42.787195 397455 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.249
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "addons-774690"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.39.249"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1206 09:12:42.787280 397455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
I1206 09:12:42.799481 397455 binaries.go:51] Found k8s binaries, skipping transfer
I1206 09:12:42.799561 397455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1206 09:12:42.811281 397455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I1206 09:12:42.832371 397455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1206 09:12:42.852252 397455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
I1206 09:12:42.872927 397455 ssh_runner.go:195] Run: grep 192.168.39.249 control-plane.minikube.internal$ /etc/hosts
I1206 09:12:42.876971 397455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.249 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
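The `/etc/hosts` rewrite above is an idempotent strip-and-append: remove any stale `control-plane.minikube.internal` entry, append the current IP, and swap the file into place. Replayed here on a scratch copy rather than the real `/etc/hosts`:

```shell
# Seed a hosts file that already has a stale control-plane entry.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n192.168.39.1\tcontrol-plane.minikube.internal\n' > "$hosts"
# Drop the stale line, append the current one, then replace the file.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  echo "192.168.39.249 control-plane.minikube.internal"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep control-plane "$hosts"   # -> 192.168.39.249 control-plane.minikube.internal
```

Writing to a temp file and `cp`/`mv`-ing it over, as the log does with `/tmp/h.$$`, avoids truncating `/etc/hosts` while it is being read.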
I1206 09:12:42.891013 397455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1206 09:12:43.027643 397455 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1206 09:12:43.056787 397455 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690 for IP: 192.168.39.249
I1206 09:12:43.056818 397455 certs.go:195] generating shared ca certs ...
I1206 09:12:43.056837 397455 certs.go:227] acquiring lock for ca certs: {Name:mk3de97d1b446a24abef5e763ff5edd1f090afa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 09:12:43.057053 397455 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key
I1206 09:12:43.191560 397455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt ...
I1206 09:12:43.191592 397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt: {Name:mk73781a6e0b099870c6ec5e2b3d5f6976131c61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 09:12:43.191778 397455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key ...
I1206 09:12:43.191791 397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key: {Name:mka4a65b4a64d945c4fff99c29e6abe899a87854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 09:12:43.191867 397455 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key
I1206 09:12:43.242567 397455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.crt ...
I1206 09:12:43.242599 397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.crt: {Name:mk5b017c0690420f6e772284318d221ff6ca606a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 09:12:43.242776 397455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key ...
I1206 09:12:43.242789 397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key: {Name:mk26268c039405e81f93848b8003ab79c2f94036 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 09:12:43.242858 397455 certs.go:257] generating profile certs ...
I1206 09:12:43.242951 397455 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.key
I1206 09:12:43.242971 397455 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt with IP's: []
I1206 09:12:43.285307 397455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt ...
I1206 09:12:43.285341 397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: {Name:mkc19579499f5e8323c4e87c54d6b9bb0d613130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 09:12:43.285515 397455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.key ...
I1206 09:12:43.285527 397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.key: {Name:mkeb31e4f2ec712c5cc198771ea9e70f2163d4ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 09:12:43.285599 397455 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.key.a10d728d
I1206 09:12:43.285618 397455 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.crt.a10d728d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249]
I1206 09:12:43.337889 397455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.crt.a10d728d ...
I1206 09:12:43.337922 397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.crt.a10d728d: {Name:mkba3a702aa3f9201be378f2005263b993e3ba17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 09:12:43.338094 397455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.key.a10d728d ...
I1206 09:12:43.338117 397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.key.a10d728d: {Name:mk043b1af64033ecc95a6f119f4ed39271950939 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 09:12:43.338197 397455 certs.go:382] copying /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.crt.a10d728d -> /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.crt
I1206 09:12:43.338271 397455 certs.go:386] copying /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.key.a10d728d -> /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.key
I1206 09:12:43.338322 397455 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/proxy-client.key
I1206 09:12:43.338341 397455 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/proxy-client.crt with IP's: []
I1206 09:12:43.459333 397455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/proxy-client.crt ...
I1206 09:12:43.459365 397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/proxy-client.crt: {Name:mk1a1a650f3f90b232c109baf3b368c83926b35e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 09:12:43.459582 397455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/proxy-client.key ...
I1206 09:12:43.459602 397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/proxy-client.key: {Name:mk34cfa33c32d5a45507e866f9c305c10bec11fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 09:12:43.459834 397455 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem (1675 bytes)
I1206 09:12:43.459880 397455 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem (1082 bytes)
I1206 09:12:43.459906 397455 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem (1123 bytes)
I1206 09:12:43.459930 397455 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem (1679 bytes)
I1206 09:12:43.460524 397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1206 09:12:43.489859 397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1206 09:12:43.517940 397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1206 09:12:43.546521 397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1206 09:12:43.575530 397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1206 09:12:43.606164 397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1206 09:12:43.637528 397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1206 09:12:43.668279 397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1206 09:12:43.698861 397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1206 09:12:43.733919 397455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1206 09:12:43.761683 397455 ssh_runner.go:195] Run: openssl version
I1206 09:12:43.768401 397455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1206 09:12:43.785336 397455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1206 09:12:43.797484 397455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1206 09:12:43.803117 397455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 6 09:12 /usr/share/ca-certificates/minikubeCA.pem
I1206 09:12:43.803185 397455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1206 09:12:43.810527 397455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1206 09:12:43.822046 397455 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
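The `b5213941.0` symlink above exists because OpenSSL looks CAs up in `/etc/ssl/certs` by a short hash of the certificate's subject name, which is what `openssl x509 -hash -noout` prints. The same pattern, reproduced with a throwaway self-signed CA (the file names here are illustrative, not the node's):

```shell
# Generate a scratch CA, compute its subject-name hash, and create the
# <hash>.0 link that OpenSSL's lookup-by-hash expects.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/demoCA.pem" 2>/dev/null
h=$(openssl x509 -hash -noout -in "$dir/demoCA.pem")
ln -fs "$dir/demoCA.pem" "$dir/$h.0"   # same pattern as minikubeCA.pem -> b5213941.0
readlink "$dir/$h.0"
```

The hash differs per CA (it is 8 hex characters), which is why the log computes it at runtime before creating the link.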
I1206 09:12:43.833465 397455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1206 09:12:43.838133 397455 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1206 09:12:43.838187 397455 kubeadm.go:401] StartCluster: {Name:addons-774690 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-774690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1206 09:12:43.838254 397455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1206 09:12:43.838316 397455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1206 09:12:43.870115 397455 cri.go:89] found id: ""
I1206 09:12:43.870209 397455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1206 09:12:43.882255 397455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1206 09:12:43.893865 397455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1206 09:12:43.905404 397455 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1206 09:12:43.905427 397455 kubeadm.go:158] found existing configuration files:
I1206 09:12:43.905474 397455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1206 09:12:43.915893 397455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1206 09:12:43.915959 397455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1206 09:12:43.927341 397455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1206 09:12:43.937764 397455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1206 09:12:43.937842 397455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1206 09:12:43.948955 397455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1206 09:12:43.959817 397455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1206 09:12:43.959898 397455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1206 09:12:43.970828 397455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1206 09:12:43.981392 397455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1206 09:12:43.981461 397455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
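Each of the four checks above is the same "grep for the expected endpoint, else delete the file" sweep; since none of the kubeconfigs exist yet on first start, every grep exits 2 and every `rm -f` is a no-op. A sketch of one iteration against a stand-in for `/etc/kubernetes/admin.conf`:

```shell
# Stand-in kubeconfig pointing at a stale endpoint.
conf=$(mktemp)
echo 'server: https://old-endpoint:8443' > "$conf"
# If the expected control-plane endpoint is absent, treat the file as
# stale (or missing) and remove it before kubeadm init.
if ! grep -q 'https://control-plane.minikube.internal:8443' "$conf"; then
  rm -f "$conf"
fi
[ -e "$conf" ] && echo kept || echo removed   # -> removed
```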
I1206 09:12:43.992794 397455 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1206 09:12:44.138818 397455 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1206 09:12:56.006151 397455 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
I1206 09:12:56.006220 397455 kubeadm.go:319] [preflight] Running pre-flight checks
I1206 09:12:56.006314 397455 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1206 09:12:56.006428 397455 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1206 09:12:56.006538 397455 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1206 09:12:56.006635 397455 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1206 09:12:56.008342 397455 out.go:252] - Generating certificates and keys ...
I1206 09:12:56.008444 397455 kubeadm.go:319] [certs] Using existing ca certificate authority
I1206 09:12:56.008527 397455 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1206 09:12:56.008630 397455 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1206 09:12:56.008734 397455 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1206 09:12:56.008849 397455 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1206 09:12:56.008952 397455 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1206 09:12:56.009038 397455 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1206 09:12:56.009202 397455 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-774690 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
I1206 09:12:56.009270 397455 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1206 09:12:56.009427 397455 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-774690 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
I1206 09:12:56.009516 397455 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1206 09:12:56.009624 397455 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1206 09:12:56.009696 397455 kubeadm.go:319] [certs] Generating "sa" key and public key
I1206 09:12:56.009801 397455 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1206 09:12:56.009887 397455 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1206 09:12:56.009963 397455 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1206 09:12:56.010037 397455 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1206 09:12:56.010132 397455 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1206 09:12:56.010213 397455 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1206 09:12:56.010314 397455 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1206 09:12:56.010412 397455 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1206 09:12:56.012213 397455 out.go:252] - Booting up control plane ...
I1206 09:12:56.012323 397455 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1206 09:12:56.012418 397455 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1206 09:12:56.012529 397455 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1206 09:12:56.012680 397455 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1206 09:12:56.012828 397455 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1206 09:12:56.012948 397455 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1206 09:12:56.013024 397455 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1206 09:12:56.013057 397455 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1206 09:12:56.013223 397455 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1206 09:12:56.013308 397455 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1206 09:12:56.013354 397455 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.0020806s
I1206 09:12:56.013425 397455 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1206 09:12:56.013500 397455 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.249:8443/livez
I1206 09:12:56.013578 397455 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1206 09:12:56.013640 397455 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1206 09:12:56.013723 397455 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.190831779s
I1206 09:12:56.013780 397455 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.148908871s
I1206 09:12:56.013839 397455 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502336925s
I1206 09:12:56.013922 397455 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1206 09:12:56.014019 397455 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1206 09:12:56.014065 397455 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1206 09:12:56.014208 397455 kubeadm.go:319] [mark-control-plane] Marking the node addons-774690 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1206 09:12:56.014256 397455 kubeadm.go:319] [bootstrap-token] Using token: hq1x23.tb70g8aq8wzcy4j9
I1206 09:12:56.015684 397455 out.go:252] - Configuring RBAC rules ...
I1206 09:12:56.015781 397455 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1206 09:12:56.015877 397455 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1206 09:12:56.016044 397455 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1206 09:12:56.016199 397455 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1206 09:12:56.016322 397455 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1206 09:12:56.016443 397455 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1206 09:12:56.016572 397455 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1206 09:12:56.016633 397455 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1206 09:12:56.016700 397455 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1206 09:12:56.016721 397455 kubeadm.go:319]
I1206 09:12:56.016808 397455 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1206 09:12:56.016821 397455 kubeadm.go:319]
I1206 09:12:56.016917 397455 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1206 09:12:56.016925 397455 kubeadm.go:319]
I1206 09:12:56.016958 397455 kubeadm.go:319] mkdir -p $HOME/.kube
I1206 09:12:56.017042 397455 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1206 09:12:56.017112 397455 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1206 09:12:56.017121 397455 kubeadm.go:319]
I1206 09:12:56.017193 397455 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1206 09:12:56.017201 397455 kubeadm.go:319]
I1206 09:12:56.017266 397455 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1206 09:12:56.017275 397455 kubeadm.go:319]
I1206 09:12:56.017344 397455 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1206 09:12:56.017445 397455 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1206 09:12:56.017536 397455 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1206 09:12:56.017545 397455 kubeadm.go:319]
I1206 09:12:56.017623 397455 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1206 09:12:56.017694 397455 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1206 09:12:56.017700 397455 kubeadm.go:319]
I1206 09:12:56.017788 397455 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hq1x23.tb70g8aq8wzcy4j9 \
I1206 09:12:56.017924 397455 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:94494b00f450bcad667cd30e10b7d2bac57a4f821af5dc44bcd0f6ad77a7145a \
I1206 09:12:56.017964 397455 kubeadm.go:319] --control-plane
I1206 09:12:56.017974 397455 kubeadm.go:319]
I1206 09:12:56.018085 397455 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1206 09:12:56.018099 397455 kubeadm.go:319]
I1206 09:12:56.018208 397455 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hq1x23.tb70g8aq8wzcy4j9 \
I1206 09:12:56.018349 397455 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:94494b00f450bcad667cd30e10b7d2bac57a4f821af5dc44bcd0f6ad77a7145a
I1206 09:12:56.018361 397455 cni.go:84] Creating CNI manager for ""
I1206 09:12:56.018370 397455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1206 09:12:56.020054 397455 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1206 09:12:56.021408 397455 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1206 09:12:56.044361 397455 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1206 09:12:56.069426 397455 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1206 09:12:56.069543 397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 09:12:56.069543 397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-774690 minikube.k8s.io/updated_at=2025_12_06T09_12_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=addons-774690 minikube.k8s.io/primary=true
I1206 09:12:56.259391 397455 ops.go:34] apiserver oom_adj: -16
I1206 09:12:56.259407 397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 09:12:56.760231 397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 09:12:57.259836 397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 09:12:57.759671 397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 09:12:58.260437 397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 09:12:58.760533 397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 09:12:59.260225 397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 09:12:59.760067 397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1206 09:12:59.883777 397455 kubeadm.go:1114] duration metric: took 3.814314606s to wait for elevateKubeSystemPrivileges
I1206 09:12:59.883840 397455 kubeadm.go:403] duration metric: took 16.045655645s to StartCluster
I1206 09:12:59.883872 397455 settings.go:142] acquiring lock: {Name:mk6aea9c06de6b4df1ec2e5d18bffa62e8a405af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 09:12:59.884053 397455 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22047-392561/kubeconfig
I1206 09:12:59.884746 397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/kubeconfig: {Name:mkde56684c6f903767a9ec1254dd48fbeb8e8b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 09:12:59.884976 397455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1206 09:12:59.884993 397455 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1206 09:12:59.885079 397455 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1206 09:12:59.885194 397455 addons.go:70] Setting yakd=true in profile "addons-774690"
I1206 09:12:59.885223 397455 addons.go:70] Setting inspektor-gadget=true in profile "addons-774690"
I1206 09:12:59.885237 397455 addons.go:70] Setting metrics-server=true in profile "addons-774690"
I1206 09:12:59.885249 397455 addons.go:239] Setting addon inspektor-gadget=true in "addons-774690"
I1206 09:12:59.885258 397455 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-774690"
I1206 09:12:59.885276 397455 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-774690"
I1206 09:12:59.885273 397455 addons.go:70] Setting default-storageclass=true in profile "addons-774690"
I1206 09:12:59.885303 397455 host.go:66] Checking if "addons-774690" exists ...
I1206 09:12:59.885315 397455 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-774690"
I1206 09:12:59.885319 397455 addons.go:70] Setting registry-creds=true in profile "addons-774690"
I1206 09:12:59.885329 397455 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-774690"
I1206 09:12:59.885339 397455 addons.go:239] Setting addon registry-creds=true in "addons-774690"
I1206 09:12:59.885335 397455 config.go:182] Loaded profile config "addons-774690": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:12:59.885360 397455 addons.go:70] Setting ingress-dns=true in profile "addons-774690"
I1206 09:12:59.885365 397455 host.go:66] Checking if "addons-774690" exists ...
I1206 09:12:59.885371 397455 addons.go:239] Setting addon ingress-dns=true in "addons-774690"
I1206 09:12:59.885383 397455 addons.go:70] Setting volcano=true in profile "addons-774690"
I1206 09:12:59.885347 397455 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-774690"
I1206 09:12:59.885404 397455 addons.go:70] Setting volumesnapshots=true in profile "addons-774690"
I1206 09:12:59.885415 397455 addons.go:239] Setting addon volumesnapshots=true in "addons-774690"
I1206 09:12:59.885422 397455 host.go:66] Checking if "addons-774690" exists ...
I1206 09:12:59.885429 397455 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-774690"
I1206 09:12:59.885436 397455 host.go:66] Checking if "addons-774690" exists ...
I1206 09:12:59.885449 397455 host.go:66] Checking if "addons-774690" exists ...
I1206 09:12:59.885309 397455 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-774690"
I1206 09:12:59.885228 397455 addons.go:239] Setting addon yakd=true in "addons-774690"
I1206 09:12:59.885931 397455 host.go:66] Checking if "addons-774690" exists ...
I1206 09:12:59.885329 397455 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-774690"
I1206 09:12:59.886214 397455 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-774690"
I1206 09:12:59.886273 397455 host.go:66] Checking if "addons-774690" exists ...
I1206 09:12:59.886378 397455 addons.go:70] Setting gcp-auth=true in profile "addons-774690"
I1206 09:12:59.886400 397455 mustload.go:66] Loading cluster: addons-774690
I1206 09:12:59.886600 397455 config.go:182] Loaded profile config "addons-774690": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:12:59.885295 397455 addons.go:70] Setting storage-provisioner=true in profile "addons-774690"
I1206 09:12:59.886679 397455 addons.go:239] Setting addon storage-provisioner=true in "addons-774690"
I1206 09:12:59.886735 397455 host.go:66] Checking if "addons-774690" exists ...
I1206 09:12:59.885372 397455 host.go:66] Checking if "addons-774690" exists ...
I1206 09:12:59.885296 397455 addons.go:70] Setting registry=true in profile "addons-774690"
I1206 09:12:59.887075 397455 addons.go:239] Setting addon registry=true in "addons-774690"
I1206 09:12:59.885339 397455 addons.go:70] Setting cloud-spanner=true in profile "addons-774690"
I1206 09:12:59.887129 397455 host.go:66] Checking if "addons-774690" exists ...
I1206 09:12:59.887143 397455 addons.go:239] Setting addon cloud-spanner=true in "addons-774690"
I1206 09:12:59.887170 397455 host.go:66] Checking if "addons-774690" exists ...
I1206 09:12:59.885250 397455 addons.go:239] Setting addon metrics-server=true in "addons-774690"
I1206 09:12:59.887644 397455 host.go:66] Checking if "addons-774690" exists ...
I1206 09:12:59.887689 397455 out.go:179] * Verifying Kubernetes components...
I1206 09:12:59.885394 397455 addons.go:239] Setting addon volcano=true in "addons-774690"
I1206 09:12:59.887702 397455 addons.go:70] Setting ingress=true in profile "addons-774690"
I1206 09:12:59.887739 397455 addons.go:239] Setting addon ingress=true in "addons-774690"
I1206 09:12:59.887753 397455 host.go:66] Checking if "addons-774690" exists ...
I1206 09:12:59.887771 397455 host.go:66] Checking if "addons-774690" exists ...
I1206 09:12:59.889532 397455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1206 09:12:59.893518 397455 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1206 09:12:59.893598 397455 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1206 09:12:59.893526 397455 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
I1206 09:12:59.893648 397455 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1206 09:12:59.893681 397455 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-774690"
I1206 09:12:59.894903 397455 host.go:66] Checking if "addons-774690" exists ...
I1206 09:12:59.894965 397455 addons.go:239] Setting addon default-storageclass=true in "addons-774690"
I1206 09:12:59.895006 397455 host.go:66] Checking if "addons-774690" exists ...
I1206 09:12:59.895530 397455 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1206 09:12:59.895557 397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1206 09:12:59.895612 397455 host.go:66] Checking if "addons-774690" exists ...
I1206 09:12:59.895539 397455 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1206 09:12:59.895632 397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1206 09:12:59.895542 397455 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1206 09:12:59.895821 397455 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1206 09:12:59.896278 397455 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1206 09:12:59.897205 397455 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
I1206 09:12:59.897217 397455 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1206 09:12:59.897218 397455 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1206 09:12:59.897209 397455 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1206 09:12:59.897208 397455 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
W1206 09:12:59.897676 397455 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1206 09:12:59.898084 397455 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1206 09:12:59.898119 397455 out.go:179] - Using image docker.io/registry:3.0.0
I1206 09:12:59.898124 397455 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1206 09:12:59.898147 397455 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1206 09:12:59.899393 397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1206 09:12:59.899057 397455 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1206 09:12:59.899800 397455 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1206 09:12:59.899956 397455 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1206 09:12:59.899980 397455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1206 09:12:59.899997 397455 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1206 09:12:59.899112 397455 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1206 09:12:59.900580 397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1206 09:12:59.900069 397455 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1206 09:12:59.900699 397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1206 09:12:59.900070 397455 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1206 09:12:59.900782 397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1206 09:12:59.900133 397455 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1206 09:12:59.900817 397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1206 09:12:59.901219 397455 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1206 09:12:59.902003 397455 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1206 09:12:59.902009 397455 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1206 09:12:59.902027 397455 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1206 09:12:59.902882 397455 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
I1206 09:12:59.902893 397455 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1206 09:12:59.903642 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.904543 397455 out.go:179] - Using image docker.io/busybox:stable
I1206 09:12:59.904662 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.904692 397455 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1206 09:12:59.905016 397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1206 09:12:59.905333 397455 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1206 09:12:59.905760 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:59.906094 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.906335 397455 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1206 09:12:59.906357 397455 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1206 09:12:59.906824 397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1206 09:12:59.906950 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:12:59.906693 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:59.907199 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.907887 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.907957 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:12:59.909424 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:59.909461 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.909754 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.910029 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:12:59.910554 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.911332 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:59.911344 397455 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1206 09:12:59.911438 397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1206 09:12:59.911620 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.911411 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.911967 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.912103 397455 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1206 09:12:59.912160 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.912358 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:59.912393 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.912517 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:12:59.912672 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.913186 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:12:59.913245 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:59.913272 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.913533 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:59.913570 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.913724 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.913895 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:59.913933 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.914045 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.914131 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:12:59.914402 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:12:59.914429 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:59.914459 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.914694 397455 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1206 09:12:59.914769 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:12:59.914982 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:12:59.915311 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:59.915382 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:59.915407 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.915412 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.915695 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:12:59.915702 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:12:59.915913 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.916494 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:59.916527 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.916674 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:12:59.916813 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.917218 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:59.917252 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.917359 397455 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1206 09:12:59.917451 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:12:59.917896 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.918336 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:59.918365 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.918505 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:12:59.920298 397455 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1206 09:12:59.921739 397455 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1206 09:12:59.921759 397455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1206 09:12:59.924322 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.924779 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:12:59.924805 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:12:59.924965 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
W1206 09:13:00.122937 397455 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60920->192.168.39.249:22: read: connection reset by peer
I1206 09:13:00.122978 397455 retry.go:31] will retry after 175.899567ms: ssh: handshake failed: read tcp 192.168.39.1:60920->192.168.39.249:22: read: connection reset by peer
W1206 09:13:00.123042 397455 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60930->192.168.39.249:22: read: connection reset by peer
I1206 09:13:00.123047 397455 retry.go:31] will retry after 182.601016ms: ssh: handshake failed: read tcp 192.168.39.1:60930->192.168.39.249:22: read: connection reset by peer
I1206 09:13:00.273021 397455 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1206 09:13:00.273105 397455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1206 09:13:00.360334 397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1206 09:13:00.404407 397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1206 09:13:00.459531 397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1206 09:13:00.460504 397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1206 09:13:00.485891 397455 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1206 09:13:00.485934 397455 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1206 09:13:00.547022 397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1206 09:13:00.556989 397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1206 09:13:00.563771 397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1206 09:13:00.575670 397455 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1206 09:13:00.575691 397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1206 09:13:00.579135 397455 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1206 09:13:00.579151 397455 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1206 09:13:00.583643 397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1206 09:13:00.632301 397455 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1206 09:13:00.632331 397455 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1206 09:13:00.763630 397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1206 09:13:00.842518 397455 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1206 09:13:00.842558 397455 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1206 09:13:00.857985 397455 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1206 09:13:00.858016 397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1206 09:13:00.939020 397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1206 09:13:00.973678 397455 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1206 09:13:00.973730 397455 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1206 09:13:01.018413 397455 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1206 09:13:01.018490 397455 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1206 09:13:01.150159 397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1206 09:13:01.163543 397455 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1206 09:13:01.163619 397455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1206 09:13:01.492871 397455 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1206 09:13:01.492906 397455 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1206 09:13:01.697315 397455 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1206 09:13:01.697348 397455 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1206 09:13:01.855900 397455 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1206 09:13:01.855928 397455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1206 09:13:01.891651 397455 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1206 09:13:01.891684 397455 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1206 09:13:02.102737 397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1206 09:13:02.509928 397455 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1206 09:13:02.509959 397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1206 09:13:02.596588 397455 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1206 09:13:02.596637 397455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1206 09:13:02.601077 397455 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1206 09:13:02.601113 397455 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1206 09:13:03.113486 397455 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1206 09:13:03.113526 397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1206 09:13:03.143545 397455 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1206 09:13:03.143583 397455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1206 09:13:03.143747 397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1206 09:13:03.409242 397455 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1206 09:13:03.409274 397455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1206 09:13:03.458076 397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1206 09:13:04.015392 397455 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1206 09:13:04.015418 397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1206 09:13:04.025004 397455 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.7518581s)
I1206 09:13:04.025060 397455 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1206 09:13:04.025104 397455 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.752046746s)
I1206 09:13:04.025166 397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.664794059s)
I1206 09:13:04.025995 397455 node_ready.go:35] waiting up to 6m0s for node "addons-774690" to be "Ready" ...
I1206 09:13:04.035138 397455 node_ready.go:49] node "addons-774690" is "Ready"
I1206 09:13:04.035170 397455 node_ready.go:38] duration metric: took 9.143113ms for node "addons-774690" to be "Ready" ...
I1206 09:13:04.035185 397455 api_server.go:52] waiting for apiserver process to appear ...
I1206 09:13:04.035233 397455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1206 09:13:04.281009 397455 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1206 09:13:04.281044 397455 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1206 09:13:04.544358 397455 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-774690" context rescaled to 1 replicas
I1206 09:13:04.618647 397455 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1206 09:13:04.618672 397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1206 09:13:04.711020 397455 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1206 09:13:04.711041 397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1206 09:13:04.833043 397455 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1206 09:13:04.833072 397455 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1206 09:13:05.253178 397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1206 09:13:06.383133 397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.978682334s)
I1206 09:13:07.418825 397455 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1206 09:13:07.421732 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:13:07.422152 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:13:07.422191 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:13:07.422343 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:13:07.698688 397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.239105307s)
I1206 09:13:07.698776 397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.23824231s)
I1206 09:13:07.698821 397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.151758924s)
I1206 09:13:07.698927 397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.141901203s)
I1206 09:13:07.698973 397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.135167948s)
W1206 09:13:07.827335 397455 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I1206 09:13:07.830544 397455 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1206 09:13:07.952723 397455 addons.go:239] Setting addon gcp-auth=true in "addons-774690"
I1206 09:13:07.952808 397455 host.go:66] Checking if "addons-774690" exists ...
I1206 09:13:07.954704 397455 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1206 09:13:07.957383 397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:13:07.957831 397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
I1206 09:13:07.957855 397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
I1206 09:13:07.958039 397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
I1206 09:13:08.053226 397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.469550845s)
I1206 09:13:08.053330 397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.289659335s)
I1206 09:13:09.923981 397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.984907982s)
I1206 09:13:09.924043 397455 addons.go:495] Verifying addon ingress=true in "addons-774690"
I1206 09:13:09.924066 397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.773856843s)
I1206 09:13:09.924096 397455 addons.go:495] Verifying addon registry=true in "addons-774690"
I1206 09:13:09.924127 397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.821359574s)
I1206 09:13:09.924220 397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.780442362s)
I1206 09:13:09.924225 397455 addons.go:495] Verifying addon metrics-server=true in "addons-774690"
I1206 09:13:09.924368 397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.466253199s)
W1206 09:13:09.924405 397455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1206 09:13:09.924437 397455 retry.go:31] will retry after 283.145717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1206 09:13:09.924449 397455 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.889196408s)
I1206 09:13:09.924484 397455 api_server.go:72] duration metric: took 10.039466985s to wait for apiserver process to appear ...
I1206 09:13:09.924496 397455 api_server.go:88] waiting for apiserver healthz status ...
I1206 09:13:09.924520 397455 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
I1206 09:13:09.925953 397455 out.go:179] * Verifying ingress addon...
I1206 09:13:09.926968 397455 out.go:179] * Verifying registry addon...
I1206 09:13:09.926967 397455 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-774690 service yakd-dashboard -n yakd-dashboard
I1206 09:13:09.929115 397455 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1206 09:13:09.930609 397455 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1206 09:13:09.965796 397455 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
ok
I1206 09:13:09.966231 397455 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1206 09:13:09.966245 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:09.966535 397455 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1206 09:13:09.966556 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:09.970498 397455 api_server.go:141] control plane version: v1.34.2
I1206 09:13:09.970534 397455 api_server.go:131] duration metric: took 46.031492ms to wait for apiserver health ...
I1206 09:13:09.970544 397455 system_pods.go:43] waiting for kube-system pods to appear ...
I1206 09:13:09.985945 397455 system_pods.go:59] 17 kube-system pods found
I1206 09:13:09.985990 397455 system_pods.go:61] "amd-gpu-device-plugin-svq5h" [ff554a2a-f7e8-4581-b0cc-821075d441f9] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1206 09:13:09.986002 397455 system_pods.go:61] "coredns-66bc5c9577-l9grt" [3c33d79c-6db7-4610-b394-d2b81216197d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1206 09:13:09.986019 397455 system_pods.go:61] "coredns-66bc5c9577-sgm5h" [0e85b90c-8f6b-4208-8699-b3dc97355093] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1206 09:13:09.986025 397455 system_pods.go:61] "etcd-addons-774690" [034cb1f2-61eb-401d-8bd4-dc4065130f57] Running
I1206 09:13:09.986031 397455 system_pods.go:61] "kube-apiserver-addons-774690" [4b7b72ad-0e63-49b0-bcd7-2027061e77e7] Running
I1206 09:13:09.986036 397455 system_pods.go:61] "kube-controller-manager-addons-774690" [045b5ffd-5313-43cc-8751-0d3927a9dd20] Running
I1206 09:13:09.986044 397455 system_pods.go:61] "kube-ingress-dns-minikube" [4117e868-9c8a-440e-9af2-45709b4fbdc3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1206 09:13:09.986049 397455 system_pods.go:61] "kube-proxy-jzp4f" [df1c8ffd-d67f-46c3-aec5-6a7b099bce49] Running
I1206 09:13:09.986055 397455 system_pods.go:61] "kube-scheduler-addons-774690" [105e520d-94c8-47b5-958a-679d16b36726] Running
I1206 09:13:09.986063 397455 system_pods.go:61] "metrics-server-85b7d694d7-clrcl" [34e1f363-ac29-415d-89c3-bfe4ac513e1f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1206 09:13:09.986074 397455 system_pods.go:61] "nvidia-device-plugin-daemonset-vdltq" [6bd89c20-b241-4230-9f16-b5904f3e8fd6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1206 09:13:09.986080 397455 system_pods.go:61] "registry-6b586f9694-4gkjr" [0b1de7e3-a280-4a46-a545-e46a47e746b0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1206 09:13:09.986092 397455 system_pods.go:61] "registry-creds-764b6fb674-m55kh" [40957898-1473-4039-aeb6-a7ece80be295] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1206 09:13:09.986101 397455 system_pods.go:61] "registry-proxy-t6flj" [50457566-2e31-43a8-9fba-b01c71f057b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1206 09:13:09.986110 397455 system_pods.go:61] "snapshot-controller-7d9fbc56b8-nld6r" [c9b5706d-2f99-4a13-aa27-d1cd48aa900b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1206 09:13:09.986118 397455 system_pods.go:61] "snapshot-controller-7d9fbc56b8-nmfz4" [51917eb2-3eac-4a48-9c5d-7f87daa63579] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1206 09:13:09.986127 397455 system_pods.go:61] "storage-provisioner" [d85c1bd3-4a0c-4397-9c7d-4cb74f18e187] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1206 09:13:09.986135 397455 system_pods.go:74] duration metric: took 15.584121ms to wait for pod list to return data ...
I1206 09:13:09.986149 397455 default_sa.go:34] waiting for default service account to be created ...
I1206 09:13:10.003134 397455 default_sa.go:45] found service account: "default"
I1206 09:13:10.003161 397455 default_sa.go:55] duration metric: took 17.006599ms for default service account to be created ...
I1206 09:13:10.003171 397455 system_pods.go:116] waiting for k8s-apps to be running ...
I1206 09:13:10.081935 397455 system_pods.go:86] 17 kube-system pods found
I1206 09:13:10.081972 397455 system_pods.go:89] "amd-gpu-device-plugin-svq5h" [ff554a2a-f7e8-4581-b0cc-821075d441f9] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1206 09:13:10.081980 397455 system_pods.go:89] "coredns-66bc5c9577-l9grt" [3c33d79c-6db7-4610-b394-d2b81216197d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1206 09:13:10.081989 397455 system_pods.go:89] "coredns-66bc5c9577-sgm5h" [0e85b90c-8f6b-4208-8699-b3dc97355093] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1206 09:13:10.081993 397455 system_pods.go:89] "etcd-addons-774690" [034cb1f2-61eb-401d-8bd4-dc4065130f57] Running
I1206 09:13:10.081999 397455 system_pods.go:89] "kube-apiserver-addons-774690" [4b7b72ad-0e63-49b0-bcd7-2027061e77e7] Running
I1206 09:13:10.082002 397455 system_pods.go:89] "kube-controller-manager-addons-774690" [045b5ffd-5313-43cc-8751-0d3927a9dd20] Running
I1206 09:13:10.082008 397455 system_pods.go:89] "kube-ingress-dns-minikube" [4117e868-9c8a-440e-9af2-45709b4fbdc3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1206 09:13:10.082011 397455 system_pods.go:89] "kube-proxy-jzp4f" [df1c8ffd-d67f-46c3-aec5-6a7b099bce49] Running
I1206 09:13:10.082015 397455 system_pods.go:89] "kube-scheduler-addons-774690" [105e520d-94c8-47b5-958a-679d16b36726] Running
I1206 09:13:10.082020 397455 system_pods.go:89] "metrics-server-85b7d694d7-clrcl" [34e1f363-ac29-415d-89c3-bfe4ac513e1f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1206 09:13:10.082025 397455 system_pods.go:89] "nvidia-device-plugin-daemonset-vdltq" [6bd89c20-b241-4230-9f16-b5904f3e8fd6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1206 09:13:10.082034 397455 system_pods.go:89] "registry-6b586f9694-4gkjr" [0b1de7e3-a280-4a46-a545-e46a47e746b0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1206 09:13:10.082042 397455 system_pods.go:89] "registry-creds-764b6fb674-m55kh" [40957898-1473-4039-aeb6-a7ece80be295] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1206 09:13:10.082046 397455 system_pods.go:89] "registry-proxy-t6flj" [50457566-2e31-43a8-9fba-b01c71f057b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1206 09:13:10.082052 397455 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nld6r" [c9b5706d-2f99-4a13-aa27-d1cd48aa900b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1206 09:13:10.082060 397455 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nmfz4" [51917eb2-3eac-4a48-9c5d-7f87daa63579] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1206 09:13:10.082065 397455 system_pods.go:89] "storage-provisioner" [d85c1bd3-4a0c-4397-9c7d-4cb74f18e187] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1206 09:13:10.082074 397455 system_pods.go:126] duration metric: took 78.896787ms to wait for k8s-apps to be running ...
I1206 09:13:10.082082 397455 system_svc.go:44] waiting for kubelet service to be running ....
I1206 09:13:10.082134 397455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1206 09:13:10.208028 397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1206 09:13:10.453451 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:10.464841 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:10.833442 397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.580203929s)
I1206 09:13:10.833487 397455 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-774690"
I1206 09:13:10.833520 397455 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.878764926s)
I1206 09:13:10.833583 397455 system_svc.go:56] duration metric: took 751.493295ms WaitForService to wait for kubelet
I1206 09:13:10.833657 397455 kubeadm.go:587] duration metric: took 10.948634384s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1206 09:13:10.833682 397455 node_conditions.go:102] verifying NodePressure condition ...
I1206 09:13:10.835084 397455 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1206 09:13:10.835089 397455 out.go:179] * Verifying csi-hostpath-driver addon...
I1206 09:13:10.836527 397455 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1206 09:13:10.837103 397455 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1206 09:13:10.838023 397455 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1206 09:13:10.838041 397455 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1206 09:13:10.884334 397455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1206 09:13:10.884372 397455 node_conditions.go:123] node cpu capacity is 2
I1206 09:13:10.884394 397455 node_conditions.go:105] duration metric: took 50.706247ms to run NodePressure ...
I1206 09:13:10.884412 397455 start.go:242] waiting for startup goroutines ...
I1206 09:13:10.884791 397455 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1206 09:13:10.884814 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:10.935048 397455 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1206 09:13:10.935075 397455 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1206 09:13:10.959927 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:10.960804 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:11.019995 397455 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1206 09:13:11.020021 397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1206 09:13:11.128849 397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1206 09:13:11.353635 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:11.454088 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:11.454913 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:11.844527 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:11.937467 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:11.939243 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:12.183572 397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.975483981s)
I1206 09:13:12.362367 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:12.463525 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:12.465243 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:12.644464 397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.51554942s)
I1206 09:13:12.645693 397455 addons.go:495] Verifying addon gcp-auth=true in "addons-774690"
I1206 09:13:12.647462 397455 out.go:179] * Verifying gcp-auth addon...
I1206 09:13:12.649999 397455 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1206 09:13:12.664141 397455 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1206 09:13:12.664164 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:12.842946 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:12.944351 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:12.944642 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:13.154573 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:13.342399 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:13.433875 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:13.437678 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:13.657220 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:13.841914 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:13.936545 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:13.937140 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:14.155091 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:14.342234 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:14.436445 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:14.440494 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:14.654894 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:14.844992 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:14.941072 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:14.944001 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:15.157755 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:15.341254 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:15.434388 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:15.435385 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:15.656059 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:15.844470 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:15.934566 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:15.935986 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:16.155344 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:16.341146 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:16.435318 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:16.437469 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:16.653920 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:16.843354 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:16.944302 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:16.944535 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:17.155368 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:17.341266 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:17.433510 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:17.434095 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:17.653622 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:17.841178 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:17.933380 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:17.934632 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:18.155823 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:18.341668 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:18.432873 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:18.434911 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:18.654384 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:18.850890 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:18.935055 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:18.937443 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:19.155333 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:19.343012 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:19.433738 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:19.436739 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:19.656734 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:19.842522 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:19.933071 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:19.937980 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:20.153505 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:20.342172 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:20.435439 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:20.435450 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:20.655757 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:20.842669 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:20.934149 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:20.937035 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:21.156829 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:21.343321 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:21.434421 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:21.435556 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:21.653389 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:21.841010 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:21.943329 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:21.945941 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:22.154252 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:22.342212 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:22.434233 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:22.436020 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:22.654204 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:22.841083 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:22.934282 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:22.934316 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:23.154928 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:23.341922 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:23.433299 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:23.434357 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:23.653585 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:23.841042 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:23.933804 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:23.936535 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:24.157689 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:24.341759 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:24.433537 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:24.434235 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:24.654804 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:24.844232 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:24.935641 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:24.937079 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:25.156488 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:25.343562 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:25.432651 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:25.436777 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:25.657236 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:25.840491 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:25.932300 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:25.935655 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:26.157124 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:26.341449 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:26.433830 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:26.439218 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:26.655212 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:26.841409 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:26.937972 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:26.938002 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:27.154065 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:27.340746 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:27.433761 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:27.434572 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:27.654305 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:27.842533 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:27.932737 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:27.935629 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:28.159414 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:28.340654 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:28.435049 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:28.436446 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:28.657604 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:28.843426 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:28.932925 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:28.937016 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:29.153938 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:29.342945 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:29.436165 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:29.436972 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:29.653276 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:29.841058 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:29.934940 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:29.936591 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:30.156434 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:30.350184 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:30.436325 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:30.436509 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:30.655073 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:30.841239 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:30.932912 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:30.936911 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:31.328104 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:31.512626 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:31.512896 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:31.514843 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:31.655408 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:31.842650 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:31.936560 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:31.938777 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:32.155781 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:32.342139 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:32.433866 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:32.436661 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:32.654426 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:32.841583 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:32.934281 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1206 09:13:32.935303 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:33.154816 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:13:33.342374 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1206 09:13:33.441312 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:13:33.441679 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
[... 58 near-identical "waiting for pod" polling lines omitted (state Pending, ~200 ms interval, selectors gcp-auth, csi-hostpath-driver, registry, ingress-nginx; 09:13:33.654 through 09:13:40.843) ...]
I1206 09:13:40.942120 397455 kapi.go:107] duration metric: took 31.011508655s to wait for kubernetes.io/minikube-addons=registry ...
I1206 09:13:40.942173 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
[... 193 near-identical "waiting for pod" polling lines omitted (state Pending, ~200 ms interval, selectors gcp-auth, csi-hostpath-driver, ingress-nginx; 09:13:41.153 through 09:14:13.153) ...]
I1206 09:14:13.341960 397455 kapi.go:107] duration metric: took 1m2.504850074s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1206 09:14:13.433237 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:14:13.654464 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:13.934770 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:14:14.154345 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:14.433829 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:14:14.656438 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:14.933533 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:14:15.154507 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:15.434111 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:14:15.655691 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:15.934680 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:14:16.156669 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:16.436962 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:14:16.662317 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:16.934638 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:14:17.156462 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:17.438444 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:14:17.662505 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:17.936294 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:14:18.359679 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:18.433774 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:14:18.653860 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:18.934972 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:14:19.157180 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:19.614949 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:14:19.716171 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:19.933418 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:14:20.156122 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:20.435747 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:14:20.656787 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:20.933784 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:14:21.159149 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:21.433620 397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1206 09:14:21.654380 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:21.933098 397455 kapi.go:107] duration metric: took 1m12.003988282s to wait for app.kubernetes.io/name=ingress-nginx ...
I1206 09:14:22.153481 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:22.654700 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:23.153403 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:23.654757 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:24.155166 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:24.656323 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:25.155051 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:25.656766 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:26.156682 397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1206 09:14:26.654063 397455 kapi.go:107] duration metric: took 1m14.004061087s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1206 09:14:26.655800 397455 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-774690 cluster.
I1206 09:14:26.657177 397455 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1206 09:14:26.658365 397455 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I1206 09:14:26.659818 397455 out.go:179] * Enabled addons: registry-creds, storage-provisioner, cloud-spanner, ingress-dns, amd-gpu-device-plugin, storage-provisioner-rancher, inspektor-gadget, nvidia-device-plugin, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I1206 09:14:26.661096 397455 addons.go:530] duration metric: took 1m26.776022692s for enable addons: enabled=[registry-creds storage-provisioner cloud-spanner ingress-dns amd-gpu-device-plugin storage-provisioner-rancher inspektor-gadget nvidia-device-plugin metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I1206 09:14:26.661145 397455 start.go:247] waiting for cluster config update ...
I1206 09:14:26.661168 397455 start.go:256] writing updated cluster config ...
I1206 09:14:26.661487 397455 ssh_runner.go:195] Run: rm -f paused
I1206 09:14:26.668181 397455 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1206 09:14:26.673234 397455 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-l9grt" in "kube-system" namespace to be "Ready" or be gone ...
I1206 09:14:26.681163 397455 pod_ready.go:94] pod "coredns-66bc5c9577-l9grt" is "Ready"
I1206 09:14:26.681211 397455 pod_ready.go:86] duration metric: took 7.944214ms for pod "coredns-66bc5c9577-l9grt" in "kube-system" namespace to be "Ready" or be gone ...
I1206 09:14:26.684186 397455 pod_ready.go:83] waiting for pod "etcd-addons-774690" in "kube-system" namespace to be "Ready" or be gone ...
I1206 09:14:26.689754 397455 pod_ready.go:94] pod "etcd-addons-774690" is "Ready"
I1206 09:14:26.689788 397455 pod_ready.go:86] duration metric: took 5.579272ms for pod "etcd-addons-774690" in "kube-system" namespace to be "Ready" or be gone ...
I1206 09:14:26.691762 397455 pod_ready.go:83] waiting for pod "kube-apiserver-addons-774690" in "kube-system" namespace to be "Ready" or be gone ...
I1206 09:14:26.697701 397455 pod_ready.go:94] pod "kube-apiserver-addons-774690" is "Ready"
I1206 09:14:26.697741 397455 pod_ready.go:86] duration metric: took 5.961081ms for pod "kube-apiserver-addons-774690" in "kube-system" namespace to be "Ready" or be gone ...
I1206 09:14:26.704301 397455 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-774690" in "kube-system" namespace to be "Ready" or be gone ...
I1206 09:14:27.073578 397455 pod_ready.go:94] pod "kube-controller-manager-addons-774690" is "Ready"
I1206 09:14:27.073608 397455 pod_ready.go:86] duration metric: took 369.279767ms for pod "kube-controller-manager-addons-774690" in "kube-system" namespace to be "Ready" or be gone ...
I1206 09:14:27.274390 397455 pod_ready.go:83] waiting for pod "kube-proxy-jzp4f" in "kube-system" namespace to be "Ready" or be gone ...
I1206 09:14:27.674174 397455 pod_ready.go:94] pod "kube-proxy-jzp4f" is "Ready"
I1206 09:14:27.674209 397455 pod_ready.go:86] duration metric: took 399.791957ms for pod "kube-proxy-jzp4f" in "kube-system" namespace to be "Ready" or be gone ...
I1206 09:14:27.873006 397455 pod_ready.go:83] waiting for pod "kube-scheduler-addons-774690" in "kube-system" namespace to be "Ready" or be gone ...
I1206 09:14:28.273339 397455 pod_ready.go:94] pod "kube-scheduler-addons-774690" is "Ready"
I1206 09:14:28.273368 397455 pod_ready.go:86] duration metric: took 400.335134ms for pod "kube-scheduler-addons-774690" in "kube-system" namespace to be "Ready" or be gone ...
I1206 09:14:28.273380 397455 pod_ready.go:40] duration metric: took 1.60514786s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1206 09:14:28.320968 397455 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
I1206 09:14:28.322740 397455 out.go:179] * Done! kubectl is now configured to use "addons-774690" cluster and "default" namespace by default
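The `pod_ready.go` lines above follow one pattern: re-check a pod's state on an interval, return once it reports "Ready" (or is gone), and fail with a duration metric at the deadline. A minimal sketch of that loop, outside minikube; the helper name `wait_for_ready`, its parameters, and the simulated state sequence are illustrative assumptions, not minikube's actual API:

```python
import time

def wait_for_ready(check, timeout=240.0, interval=0.5,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it reports "Ready" or "Gone", like pod_ready.go.

    check: callable returning the current state string (e.g. "Pending").
    Raises TimeoutError once the deadline passes while still not ready.
    clock/sleep are injectable so the loop can be tested without waiting.
    """
    deadline = clock() + timeout
    while True:
        state = check()
        if state in ("Ready", "Gone"):
            return state  # pod became Ready, or was deleted
        if clock() >= deadline:
            raise TimeoutError(f"still {state!r} after {timeout}s")
        sleep(interval)

# Simulated pod that stays Pending for two polls, then becomes Ready
states = iter(["Pending", "Pending", "Ready"])
result = wait_for_ready(lambda: next(states), timeout=5.0,
                        interval=0, sleep=lambda s: None)
```

The injectable `clock`/`sleep` mirror why the log can report exact duration metrics ("took 7.944214ms", "took 400.335134ms"): the wait helper measures its own elapsed time around this loop.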
==> CRI-O <==
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.181767409Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765012659181742096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585488,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=239eed1c-6e60-42df-806b-1bc9bded394a name=/runtime.v1.ImageService/ImageFsInfo
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.182875640Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01816479-5c92-4364-833d-f63e3df18f53 name=/runtime.v1.RuntimeService/ListContainers
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.182934683Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01816479-5c92-4364-833d-f63e3df18f53 name=/runtime.v1.RuntimeService/ListContainers
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.183229731Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e1cee18d79c87bdb4fdcf0e2d5c674f013ebf86580772154072e1ed786f7ed7,PodSandboxId:71fcc4f6cb756a525536deb6b9d97220e091c16afb8f0ca488d3de14c216af5a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765012518156891547,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea010f3e-0b70-4331-8ef2-e8dbeb8da0dd,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3b54b242a4dbc9a6b8480824017e9c9c6efa05164c2e351137352cb17cd6cc,PodSandboxId:ffb5e4f0851d0f7a56808138790d5472437aeb9761b487e1720f6c2db147a419,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765012472828861850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ccc10db2-3a00-4383-80ab-805fd3af8161,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6f65a7e0016613d05aaf2daae13186f051dd1e1e72fc6802d5acdd53421dea,PodSandboxId:8afa3f705b0c6e4feab3450eb8883f9cbb51b27fb57059af1873c5b0173db425,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1765012461241906908,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-cghl5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da20bd88-903b-4cb3-bfa2-e07ba41ddf78,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5032adb8f732b213034bf5c01beb4d8a43caf71af8b71077d79ed631659e35d8,PodSandboxId:196c96f53ebd7eca5c62ef767a5585d3332a9690fb78d1a9c5753662a96715b7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765012434467670209,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wjfhp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e654e7d8-13a2-47be-a4f3-2e26e6350997,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c851074b8cff4771f8483139eed4b1df8fd501a563dd6576daaaa92457d4bd4a,PodSandboxId:1e9c8ab1f4f0a62113356cd2c2f5dbdb22a363ce60276bd95f12a6ac531365ba,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765012434346239312,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4c946,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: be11f3da-9722-453a-835b-e18b8f03516c,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476a9346a94d5e97321a7dad3b910fafc5ddb361d5194546e8b7203e9348e5ea,PodSandboxId:c7bb2700615f9b21f09b213f2165bf5bf18924a2725583d502f86217d81cb694,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765012412119893295,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4117e868-9c8a-440e-9af2-45709b4fbdc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35444c80ca7de2189bace6e1fa56dbf377fff82be49890c85053e4d3183ce534,PodSandboxId:3acadb9fd10d100d5e337ab828043990d404ee560df077b0eaff596bf8c88e82,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765012389336047518,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-svq5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff554a2a-f7e8-4581-b0cc-821075d441f9,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194a16bb3f558c9e566d6dd9d5d4d7ad1544b1cfd117ed99a25359e961ff291f,PodSandboxId:31e85e1cb6b9807622c540332611d99b9261a6273e7058205a0ba0292d86a79f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765012388959116107,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85c1bd3-4a0c-4397-9c7d-4cb74f18e187,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c52ec85be8f0804ab5cfb12ca329e31a05a691124b480be6dad48aaf8b57dd5d,PodSandboxId:fe6d9f5f0c70386cc189f4d1509d794f0ed1542d0a663567fec6acbf84c47c3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765012382415682102,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-l9grt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c33d79c-6db7-4610-b394-d2b81216197d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14cd309852195096b88efe9d3a347c387db1c0468ac7422480066369d7c24144,PodSandboxId:d90dd60e67b985e3e6869abab033af7459d9a60035ae735e6a1da4afeef2f574,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765012381813311355,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jzp4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df1c8ffd-d67f-46c3-aec5-6a7b099bce49,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cbfd50881df9ebf5b3ff65c7307461fe5f134037643baf679f5a2991aec5829,PodSandboxId:ab48cac50ef3adebb02fcd7be63a03640d303a6d5f911d3a50d60bbaae6e3d70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765012368917345556,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 549fd7c125f874ea8194dda0339bd0ad,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d60b33a68a97763da81ea2c5b36356d161a454f9fdaedaedda9d6770b3a441c5,PodSandboxId:69abe07fabddb33f62a7450189eedce6dc9ae410a2aca409985fc6a444f396d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765012368908698593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c042aa351a5b570e306966a6f284a804,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cafa29f45be57922934aa2804adc6c03cfd405657efff21b020a18542e39b78,PodSandboxId:c3fa7030d6163b5a793cbf96f4803e4b43dfe48b945917fe7b354987e20ca53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765012368861531869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df90b68ae6a59daaf09af3b96ff025b7,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897c83e9715cf47989d744826a723d32ae6a225303a4d9621d6cb1b373e84ebb,PodSandboxId:b03ab93a9ca6007af5cfc2bf48cdead893b9ed565c7c7a9e99e2b0374799ef1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765012368854172850,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054878d143440cf1165e963a55f38038,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01816479-5c92-4364-833d-f63e3df18f53 name=/runtime.v1.RuntimeService/ListContainers
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.218609594Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a1231e95-2d62-4ec0-861f-7fcd3e8bee11 name=/runtime.v1.RuntimeService/Version
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.218694527Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1231e95-2d62-4ec0-861f-7fcd3e8bee11 name=/runtime.v1.RuntimeService/Version
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.220655977Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e430b55-9425-4cd5-94b3-74cf18a9bc8a name=/runtime.v1.ImageService/ImageFsInfo
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.221925855Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765012659221895773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585488,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e430b55-9425-4cd5-94b3-74cf18a9bc8a name=/runtime.v1.ImageService/ImageFsInfo
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.223187237Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8836c094-defa-463e-a0c9-0d0b0b5d0f0a name=/runtime.v1.RuntimeService/ListContainers
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.223301871Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8836c094-defa-463e-a0c9-0d0b0b5d0f0a name=/runtime.v1.RuntimeService/ListContainers
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.223673975Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e1cee18d79c87bdb4fdcf0e2d5c674f013ebf86580772154072e1ed786f7ed7,PodSandboxId:71fcc4f6cb756a525536deb6b9d97220e091c16afb8f0ca488d3de14c216af5a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765012518156891547,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea010f3e-0b70-4331-8ef2-e8dbeb8da0dd,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3b54b242a4dbc9a6b8480824017e9c9c6efa05164c2e351137352cb17cd6cc,PodSandboxId:ffb5e4f0851d0f7a56808138790d5472437aeb9761b487e1720f6c2db147a419,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765012472828861850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ccc10db2-3a00-4383-80ab-805fd3af8161,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6f65a7e0016613d05aaf2daae13186f051dd1e1e72fc6802d5acdd53421dea,PodSandboxId:8afa3f705b0c6e4feab3450eb8883f9cbb51b27fb57059af1873c5b0173db425,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1765012461241906908,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-cghl5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da20bd88-903b-4cb3-bfa2-e07ba41ddf78,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5032adb8f732b213034bf5c01beb4d8a43caf71af8b71077d79ed631659e35d8,PodSandboxId:196c96f53ebd7eca5c62ef767a5585d3332a9690fb78d1a9c5753662a96715b7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765012434467670209,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wjfhp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e654e7d8-13a2-47be-a4f3-2e26e6350997,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c851074b8cff4771f8483139eed4b1df8fd501a563dd6576daaaa92457d4bd4a,PodSandboxId:1e9c8ab1f4f0a62113356cd2c2f5dbdb22a363ce60276bd95f12a6ac531365ba,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765012434346239312,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4c946,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: be11f3da-9722-453a-835b-e18b8f03516c,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476a9346a94d5e97321a7dad3b910fafc5ddb361d5194546e8b7203e9348e5ea,PodSandboxId:c7bb2700615f9b21f09b213f2165bf5bf18924a2725583d502f86217d81cb694,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765012412119893295,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4117e868-9c8a-440e-9af2-45709b4fbdc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35444c80ca7de2189bace6e1fa56dbf377fff82be49890c85053e4d3183ce534,PodSandboxId:3acadb9fd10d100d5e337ab828043990d404ee560df077b0eaff596bf8c88e82,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765012389336047518,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-svq5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff554a2a-f7e8-4581-b0cc-821075d441f9,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194a16bb3f558c9e566d6dd9d5d4d7ad1544b1cfd117ed99a25359e961ff291f,PodSandboxId:31e85e1cb6b9807622c540332611d99b9261a6273e7058205a0ba0292d86a79f,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765012388959116107,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85c1bd3-4a0c-4397-9c7d-4cb74f18e187,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c52ec85be8f0804ab5cfb12ca329e31a05a691124b480be6dad48aaf8b57dd5d,PodSandboxId:fe6d9f5f0c70386cc189f4d1509d794f0ed1542d0a663567fec6acbf84c47c3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765012382415682102,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-l9grt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c33d79c-6db7-4610-b394-d2b81216197d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14cd309852195096b88efe9d3a347c387db1c0468ac7422480066369d7c24144,PodSandboxId:d90dd60e67b985e3e6869abab033af7459d9a60035ae735e6a1da4afeef2f574,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765012381813311355,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jzp4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df1c8ffd-d67f-46c3-aec5-6a7b099bce49,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cbfd50881df9ebf5b3ff65c7307461fe5f134037643baf679f5a2991aec5829,PodSandboxId:ab48cac50ef3adebb02fcd7be63a03640d303a6d5f911d3a50d60bbaae6e3d70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765012368917345556,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 549fd7c125f874ea8194dda0339bd0ad,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d60b33a68a97763da81ea2c5b36356d161a454f9fdaedaedda9d6770b3a441c5,PodSandboxId:69abe07fabddb33f62a7450189eedce6dc9ae410a2aca409985fc6a444f396d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765012368908698593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c042aa351a5b570e306966a6f284a804,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-p
ort\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cafa29f45be57922934aa2804adc6c03cfd405657efff21b020a18542e39b78,PodSandboxId:c3fa7030d6163b5a793cbf96f4803e4b43dfe48b945917fe7b354987e20ca53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765012368861531869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df90b68ae6a59daaf09af3b96ff025b7,},Annotations:map[
string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897c83e9715cf47989d744826a723d32ae6a225303a4d9621d6cb1b373e84ebb,PodSandboxId:b03ab93a9ca6007af5cfc2bf48cdead893b9ed565c7c7a9e99e2b0374799ef1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765012368854172850,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manage
r-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054878d143440cf1165e963a55f38038,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8836c094-defa-463e-a0c9-0d0b0b5d0f0a name=/runtime.v1.RuntimeService/ListContainers
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.254701849Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b12f2ee-cd57-4a53-bc5b-c7c2ba61ede7 name=/runtime.v1.RuntimeService/Version
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.254790319Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b12f2ee-cd57-4a53-bc5b-c7c2ba61ede7 name=/runtime.v1.RuntimeService/Version
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.256711558Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb907702-f28f-4e64-870a-64d3a48a56f5 name=/runtime.v1.ImageService/ImageFsInfo
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.258325321Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765012659258229913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585488,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb907702-f28f-4e64-870a-64d3a48a56f5 name=/runtime.v1.ImageService/ImageFsInfo
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.259404440Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bfb6d2ed-5bb9-40dd-b5a9-228e258bccd4 name=/runtime.v1.RuntimeService/ListContainers
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.259543553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bfb6d2ed-5bb9-40dd-b5a9-228e258bccd4 name=/runtime.v1.RuntimeService/ListContainers
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.260543804Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e1cee18d79c87bdb4fdcf0e2d5c674f013ebf86580772154072e1ed786f7ed7,PodSandboxId:71fcc4f6cb756a525536deb6b9d97220e091c16afb8f0ca488d3de14c216af5a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765012518156891547,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea010f3e-0b70-4331-8ef2-e8dbeb8da0dd,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3b54b242a4dbc9a6b8480824017e9c9c6efa05164c2e351137352cb17cd6cc,PodSandboxId:ffb5e4f0851d0f7a56808138790d5472437aeb9761b487e1720f6c2db147a419,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765012472828861850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ccc10db2-3a00-4383-80ab-805fd3af8161,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6f65a7e0016613d05aaf2daae13186f051dd1e1e72fc6802d5acdd53421dea,PodSandboxId:8afa3f705b0c6e4feab3450eb8883f9cbb51b27fb57059af1873c5b0173db425,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1765012461241906908,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-cghl5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da20bd88-903b-4cb3-bfa2-e07ba41ddf78,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5032adb8f732b213034bf5c01beb4d8a43caf71af8b71077d79ed631659e35d8,PodSandboxId:196c96f53ebd7eca5c62ef767a5585d3332a9690fb78d1a9c5753662a96715b7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765012434467670209,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wjfhp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e654e7d8-13a2-47be-a4f3-2e26e6350997,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c851074b8cff4771f8483139eed4b1df8fd501a563dd6576daaaa92457d4bd4a,PodSandboxId:1e9c8ab1f4f0a62113356cd2c2f5dbdb22a363ce60276bd95f12a6ac531365ba,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765012434346239312,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4c946,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: be11f3da-9722-453a-835b-e18b8f03516c,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476a9346a94d5e97321a7dad3b910fafc5ddb361d5194546e8b7203e9348e5ea,PodSandboxId:c7bb2700615f9b21f09b213f2165bf5bf18924a2725583d502f86217d81cb694,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765012412119893295,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4117e868-9c8a-440e-9af2-45709b4fbdc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35444c80ca7de2189bace6e1fa56dbf377fff82be49890c85053e4d3183ce534,PodSandboxId:3acadb9fd10d100d5e337ab828043990d404ee560df077b0eaff596bf8c88e82,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765012389336047518,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-svq5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff554a2a-f7e8-4581-b0cc-821075d441f9,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194a16bb3f558c9e566d6dd9d5d4d7ad1544b1cfd117ed99a25359e961ff291f,PodSandboxId:31e85e1cb6b9807622c540332611d99b9261a6273e7058205a0ba0292d86a79f,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765012388959116107,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85c1bd3-4a0c-4397-9c7d-4cb74f18e187,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c52ec85be8f0804ab5cfb12ca329e31a05a691124b480be6dad48aaf8b57dd5d,PodSandboxId:fe6d9f5f0c70386cc189f4d1509d794f0ed1542d0a663567fec6acbf84c47c3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765012382415682102,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-l9grt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c33d79c-6db7-4610-b394-d2b81216197d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14cd309852195096b88efe9d3a347c387db1c0468ac7422480066369d7c24144,PodSandboxId:d90dd60e67b985e3e6869abab033af7459d9a60035ae735e6a1da4afeef2f574,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765012381813311355,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jzp4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df1c8ffd-d67f-46c3-aec5-6a7b099bce49,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cbfd50881df9ebf5b3ff65c7307461fe5f134037643baf679f5a2991aec5829,PodSandboxId:ab48cac50ef3adebb02fcd7be63a03640d303a6d5f911d3a50d60bbaae6e3d70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765012368917345556,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 549fd7c125f874ea8194dda0339bd0ad,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d60b33a68a97763da81ea2c5b36356d161a454f9fdaedaedda9d6770b3a441c5,PodSandboxId:69abe07fabddb33f62a7450189eedce6dc9ae410a2aca409985fc6a444f396d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765012368908698593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c042aa351a5b570e306966a6f284a804,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-p
ort\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cafa29f45be57922934aa2804adc6c03cfd405657efff21b020a18542e39b78,PodSandboxId:c3fa7030d6163b5a793cbf96f4803e4b43dfe48b945917fe7b354987e20ca53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765012368861531869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df90b68ae6a59daaf09af3b96ff025b7,},Annotations:map[
string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897c83e9715cf47989d744826a723d32ae6a225303a4d9621d6cb1b373e84ebb,PodSandboxId:b03ab93a9ca6007af5cfc2bf48cdead893b9ed565c7c7a9e99e2b0374799ef1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765012368854172850,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manage
r-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054878d143440cf1165e963a55f38038,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bfb6d2ed-5bb9-40dd-b5a9-228e258bccd4 name=/runtime.v1.RuntimeService/ListContainers
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.293170170Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a62433ef-2eca-4419-b418-a4d16b85bcd0 name=/runtime.v1.RuntimeService/Version
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.293259732Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a62433ef-2eca-4419-b418-a4d16b85bcd0 name=/runtime.v1.RuntimeService/Version
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.295210962Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=996d5602-b77d-4ebe-b3b6-4dd54d6dd333 name=/runtime.v1.ImageService/ImageFsInfo
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.296599505Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765012659296563289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585488,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=996d5602-b77d-4ebe-b3b6-4dd54d6dd333 name=/runtime.v1.ImageService/ImageFsInfo
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.297940744Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94746b33-f7dc-4546-aac4-8502f7e7f59c name=/runtime.v1.RuntimeService/ListContainers
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.298002143Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94746b33-f7dc-4546-aac4-8502f7e7f59c name=/runtime.v1.RuntimeService/ListContainers
Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.298326589Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e1cee18d79c87bdb4fdcf0e2d5c674f013ebf86580772154072e1ed786f7ed7,PodSandboxId:71fcc4f6cb756a525536deb6b9d97220e091c16afb8f0ca488d3de14c216af5a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765012518156891547,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea010f3e-0b70-4331-8ef2-e8dbeb8da0dd,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3b54b242a4dbc9a6b8480824017e9c9c6efa05164c2e351137352cb17cd6cc,PodSandboxId:ffb5e4f0851d0f7a56808138790d5472437aeb9761b487e1720f6c2db147a419,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765012472828861850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ccc10db2-3a00-4383-80ab-805fd3af8161,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6f65a7e0016613d05aaf2daae13186f051dd1e1e72fc6802d5acdd53421dea,PodSandboxId:8afa3f705b0c6e4feab3450eb8883f9cbb51b27fb57059af1873c5b0173db425,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1765012461241906908,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-cghl5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da20bd88-903b-4cb3-bfa2-e07ba41ddf78,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5032adb8f732b213034bf5c01beb4d8a43caf71af8b71077d79ed631659e35d8,PodSandboxId:196c96f53ebd7eca5c62ef767a5585d3332a9690fb78d1a9c5753662a96715b7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765012434467670209,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wjfhp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e654e7d8-13a2-47be-a4f3-2e26e6350997,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c851074b8cff4771f8483139eed4b1df8fd501a563dd6576daaaa92457d4bd4a,PodSandboxId:1e9c8ab1f4f0a62113356cd2c2f5dbdb22a363ce60276bd95f12a6ac531365ba,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765012434346239312,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4c946,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: be11f3da-9722-453a-835b-e18b8f03516c,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476a9346a94d5e97321a7dad3b910fafc5ddb361d5194546e8b7203e9348e5ea,PodSandboxId:c7bb2700615f9b21f09b213f2165bf5bf18924a2725583d502f86217d81cb694,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765012412119893295,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4117e868-9c8a-440e-9af2-45709b4fbdc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35444c80ca7de2189bace6e1fa56dbf377fff82be49890c85053e4d3183ce534,PodSandboxId:3acadb9fd10d100d5e337ab828043990d404ee560df077b0eaff596bf8c88e82,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765012389336047518,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-svq5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff554a2a-f7e8-4581-b0cc-821075d441f9,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194a16bb3f558c9e566d6dd9d5d4d7ad1544b1cfd117ed99a25359e961ff291f,PodSandboxId:31e85e1cb6b9807622c540332611d99b9261a6273e7058205a0ba0292d86a79f,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765012388959116107,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85c1bd3-4a0c-4397-9c7d-4cb74f18e187,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c52ec85be8f0804ab5cfb12ca329e31a05a691124b480be6dad48aaf8b57dd5d,PodSandboxId:fe6d9f5f0c70386cc189f4d1509d794f0ed1542d0a663567fec6acbf84c47c3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765012382415682102,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-l9grt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c33d79c-6db7-4610-b394-d2b81216197d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14cd309852195096b88efe9d3a347c387db1c0468ac7422480066369d7c24144,PodSandboxId:d90dd60e67b985e3e6869abab033af7459d9a60035ae735e6a1da4afeef2f574,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765012381813311355,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jzp4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df1c8ffd-d67f-46c3-aec5-6a7b099bce49,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cbfd50881df9ebf5b3ff65c7307461fe5f134037643baf679f5a2991aec5829,PodSandboxId:ab48cac50ef3adebb02fcd7be63a03640d303a6d5f911d3a50d60bbaae6e3d70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765012368917345556,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 549fd7c125f874ea8194dda0339bd0ad,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d60b33a68a97763da81ea2c5b36356d161a454f9fdaedaedda9d6770b3a441c5,PodSandboxId:69abe07fabddb33f62a7450189eedce6dc9ae410a2aca409985fc6a444f396d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765012368908698593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c042aa351a5b570e306966a6f284a804,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-p
ort\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cafa29f45be57922934aa2804adc6c03cfd405657efff21b020a18542e39b78,PodSandboxId:c3fa7030d6163b5a793cbf96f4803e4b43dfe48b945917fe7b354987e20ca53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765012368861531869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df90b68ae6a59daaf09af3b96ff025b7,},Annotations:map[
string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897c83e9715cf47989d744826a723d32ae6a225303a4d9621d6cb1b373e84ebb,PodSandboxId:b03ab93a9ca6007af5cfc2bf48cdead893b9ed565c7c7a9e99e2b0374799ef1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765012368854172850,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manage
r-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054878d143440cf1165e963a55f38038,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94746b33-f7dc-4546-aac4-8502f7e7f59c name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
9e1cee18d79c8 docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 2 minutes ago Running nginx 0 71fcc4f6cb756 nginx default
4c3b54b242a4d gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 3 minutes ago Running busybox 0 ffb5e4f0851d0 busybox default
0b6f65a7e0016 registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27 3 minutes ago Running controller 0 8afa3f705b0c6 ingress-nginx-controller-6c8bf45fb-cghl5 ingress-nginx
5032adb8f732b registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f 3 minutes ago Exited patch 0 196c96f53ebd7 ingress-nginx-admission-patch-wjfhp ingress-nginx
c851074b8cff4 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f 3 minutes ago Exited create 0 1e9c8ab1f4f0a ingress-nginx-admission-create-4c946 ingress-nginx
476a9346a94d5 docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 4 minutes ago Running minikube-ingress-dns 0 c7bb2700615f9 kube-ingress-dns-minikube kube-system
35444c80ca7de docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 3acadb9fd10d1 amd-gpu-device-plugin-svq5h kube-system
194a16bb3f558 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 31e85e1cb6b98 storage-provisioner kube-system
c52ec85be8f08 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 fe6d9f5f0c703 coredns-66bc5c9577-l9grt kube-system
14cd309852195 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45 4 minutes ago Running kube-proxy 0 d90dd60e67b98 kube-proxy-jzp4f kube-system
2cbfd50881df9 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952 4 minutes ago Running kube-scheduler 0 ab48cac50ef3a kube-scheduler-addons-774690 kube-system
d60b33a68a977 a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1 4 minutes ago Running etcd 0 69abe07fabddb etcd-addons-774690 kube-system
0cafa29f45be5 a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85 4 minutes ago Running kube-apiserver 0 c3fa7030d6163 kube-apiserver-addons-774690 kube-system
897c83e9715cf 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8 4 minutes ago Running kube-controller-manager 0 b03ab93a9ca60 kube-controller-manager-addons-774690 kube-system
==> coredns [c52ec85be8f0804ab5cfb12ca329e31a05a691124b480be6dad48aaf8b57dd5d] <==
[INFO] 10.244.0.8:38479 - 22161 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.0000837s
[INFO] 10.244.0.8:38479 - 921 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000530187s
[INFO] 10.244.0.8:38479 - 33226 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000134107s
[INFO] 10.244.0.8:38479 - 27822 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00009165s
[INFO] 10.244.0.8:38479 - 30955 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000119976s
[INFO] 10.244.0.8:38479 - 57768 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000097078s
[INFO] 10.244.0.8:38479 - 34237 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000150233s
[INFO] 10.244.0.8:54447 - 25639 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000354338s
[INFO] 10.244.0.8:54447 - 25349 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000384249s
[INFO] 10.244.0.8:57241 - 24735 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000477066s
[INFO] 10.244.0.8:57241 - 24981 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000229799s
[INFO] 10.244.0.8:34151 - 4730 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081528s
[INFO] 10.244.0.8:34151 - 5009 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000249135s
[INFO] 10.244.0.8:45428 - 59898 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000223567s
[INFO] 10.244.0.8:45428 - 60072 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000067055s
[INFO] 10.244.0.23:33031 - 61050 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000428534s
[INFO] 10.244.0.23:43838 - 37140 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000514283s
[INFO] 10.244.0.23:38503 - 47608 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124935s
[INFO] 10.244.0.23:46672 - 12201 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000116581s
[INFO] 10.244.0.23:51075 - 19560 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090169s
[INFO] 10.244.0.23:40804 - 24550 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090307s
[INFO] 10.244.0.23:36804 - 40666 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001298056s
[INFO] 10.244.0.23:54972 - 54554 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 306 0.001095651s
[INFO] 10.244.0.28:42545 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000593436s
[INFO] 10.244.0.28:53364 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000132493s
==> describe nodes <==
Name: addons-774690
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-774690
kubernetes.io/os=linux
minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
minikube.k8s.io/name=addons-774690
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_06T09_12_56_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-774690
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 06 Dec 2025 09:12:51 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-774690
AcquireTime: <unset>
RenewTime: Sat, 06 Dec 2025 09:17:31 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 06 Dec 2025 09:15:59 +0000 Sat, 06 Dec 2025 09:12:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 06 Dec 2025 09:15:59 +0000 Sat, 06 Dec 2025 09:12:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 06 Dec 2025 09:15:59 +0000 Sat, 06 Dec 2025 09:12:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 06 Dec 2025 09:15:59 +0000 Sat, 06 Dec 2025 09:12:56 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.249
Hostname: addons-774690
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001784Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001784Ki
pods: 110
System Info:
Machine ID: 6637641e43854e2fbcf4adf9edc82956
System UUID: 6637641e-4385-4e2f-bcf4-adf9edc82956
Boot ID: a93b70ca-ecc7-4c42-93b3-1bf205cb601f
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.2
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m11s
default hello-world-app-5d498dc89-twkk5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m26s
ingress-nginx ingress-nginx-controller-6c8bf45fb-cghl5 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m30s
kube-system amd-gpu-device-plugin-svq5h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m36s
kube-system coredns-66bc5c9577-l9grt 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m38s
kube-system etcd-addons-774690 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m44s
kube-system kube-apiserver-addons-774690 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m44s
kube-system kube-controller-manager-addons-774690 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m44s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m34s
kube-system kube-proxy-jzp4f 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m39s
kube-system kube-scheduler-addons-774690 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m44s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m33s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m36s kube-proxy
Normal NodeHasSufficientMemory 4m52s (x8 over 4m52s) kubelet Node addons-774690 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m52s (x8 over 4m52s) kubelet Node addons-774690 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m52s (x7 over 4m52s) kubelet Node addons-774690 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m52s kubelet Updated Node Allocatable limit across pods
Normal Starting 4m44s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m44s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m44s kubelet Node addons-774690 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m44s kubelet Node addons-774690 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m44s kubelet Node addons-774690 status is now: NodeHasSufficientPID
Normal NodeReady 4m43s kubelet Node addons-774690 status is now: NodeReady
Normal RegisteredNode 4m40s node-controller Node addons-774690 event: Registered Node addons-774690 in Controller
==> dmesg <==
[ +3.802472] kauditd_printk_skb: 275 callbacks suppressed
[ +5.941853] kauditd_printk_skb: 5 callbacks suppressed
[ +9.404962] kauditd_printk_skb: 11 callbacks suppressed
[ +8.383101] kauditd_printk_skb: 26 callbacks suppressed
[ +7.801662] kauditd_printk_skb: 32 callbacks suppressed
[ +6.035811] kauditd_printk_skb: 56 callbacks suppressed
[ +3.791918] kauditd_printk_skb: 66 callbacks suppressed
[Dec 6 09:14] kauditd_printk_skb: 122 callbacks suppressed
[ +3.685175] kauditd_printk_skb: 120 callbacks suppressed
[ +0.000035] kauditd_printk_skb: 59 callbacks suppressed
[ +5.854294] kauditd_printk_skb: 53 callbacks suppressed
[ +3.586770] kauditd_printk_skb: 47 callbacks suppressed
[ +10.492396] kauditd_printk_skb: 17 callbacks suppressed
[ +0.000025] kauditd_printk_skb: 22 callbacks suppressed
[ +0.782217] kauditd_printk_skb: 107 callbacks suppressed
[Dec 6 09:15] kauditd_printk_skb: 105 callbacks suppressed
[ +0.533976] kauditd_printk_skb: 114 callbacks suppressed
[ +5.601089] kauditd_printk_skb: 152 callbacks suppressed
[ +5.748227] kauditd_printk_skb: 79 callbacks suppressed
[ +0.000031] kauditd_printk_skb: 15 callbacks suppressed
[ +5.966558] kauditd_printk_skb: 26 callbacks suppressed
[ +6.096283] kauditd_printk_skb: 25 callbacks suppressed
[ +1.158400] kauditd_printk_skb: 46 callbacks suppressed
[ +6.725997] kauditd_printk_skb: 5 callbacks suppressed
[Dec 6 09:17] kauditd_printk_skb: 127 callbacks suppressed
==> etcd [d60b33a68a97763da81ea2c5b36356d161a454f9fdaedaedda9d6770b3a441c5] <==
{"level":"info","ts":"2025-12-06T09:14:18.354404Z","caller":"traceutil/trace.go:172","msg":"trace[921660259] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1171; }","duration":"197.744138ms","start":"2025-12-06T09:14:18.156650Z","end":"2025-12-06T09:14:18.354395Z","steps":["trace[921660259] 'agreement among raft nodes before linearized reading' (duration: 197.330319ms)"],"step_count":1}
{"level":"info","ts":"2025-12-06T09:14:19.606858Z","caller":"traceutil/trace.go:172","msg":"trace[609749521] linearizableReadLoop","detail":"{readStateIndex:1203; appliedIndex:1203; }","duration":"233.521525ms","start":"2025-12-06T09:14:19.373296Z","end":"2025-12-06T09:14:19.606817Z","steps":["trace[609749521] 'read index received' (duration: 233.515626ms)","trace[609749521] 'applied index is now lower than readState.Index' (duration: 4.782µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-06T09:14:19.607011Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"233.698844ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" limit:1 ","response":"range_response_count:1 size:554"}
{"level":"info","ts":"2025-12-06T09:14:19.607029Z","caller":"traceutil/trace.go:172","msg":"trace[1978986472] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1172; }","duration":"233.73196ms","start":"2025-12-06T09:14:19.373292Z","end":"2025-12-06T09:14:19.607024Z","steps":["trace[1978986472] 'agreement among raft nodes before linearized reading' (duration: 233.624499ms)"],"step_count":1}
{"level":"info","ts":"2025-12-06T09:14:19.607041Z","caller":"traceutil/trace.go:172","msg":"trace[1381779295] transaction","detail":"{read_only:false; response_revision:1173; number_of_response:1; }","duration":"254.320682ms","start":"2025-12-06T09:14:19.352710Z","end":"2025-12-06T09:14:19.607030Z","steps":["trace[1381779295] 'process raft request' (duration: 254.234176ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-06T09:14:19.607231Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"180.406511ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-06T09:14:19.607250Z","caller":"traceutil/trace.go:172","msg":"trace[1179621402] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1173; }","duration":"180.427816ms","start":"2025-12-06T09:14:19.426817Z","end":"2025-12-06T09:14:19.607245Z","steps":["trace[1179621402] 'agreement among raft nodes before linearized reading' (duration: 180.381933ms)"],"step_count":1}
{"level":"info","ts":"2025-12-06T09:14:30.296338Z","caller":"traceutil/trace.go:172","msg":"trace[1351520407] transaction","detail":"{read_only:false; response_revision:1227; number_of_response:1; }","duration":"126.632898ms","start":"2025-12-06T09:14:30.169680Z","end":"2025-12-06T09:14:30.296313Z","steps":["trace[1351520407] 'process raft request' (duration: 126.451658ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-06T09:14:55.948736Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.586397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2025-12-06T09:14:55.948829Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"213.553623ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-06T09:14:55.948850Z","caller":"traceutil/trace.go:172","msg":"trace[1311282944] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1390; }","duration":"213.569189ms","start":"2025-12-06T09:14:55.735273Z","end":"2025-12-06T09:14:55.948842Z","steps":["trace[1311282944] 'range keys from in-memory index tree' (duration: 213.495998ms)"],"step_count":1}
{"level":"info","ts":"2025-12-06T09:14:55.948855Z","caller":"traceutil/trace.go:172","msg":"trace[1384824861] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1390; }","duration":"165.726562ms","start":"2025-12-06T09:14:55.783108Z","end":"2025-12-06T09:14:55.948835Z","steps":["trace[1384824861] 'range keys from in-memory index tree' (duration: 165.52952ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-06T09:14:55.948785Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.556436ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourceclaimtemplates\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-06T09:14:55.948997Z","caller":"traceutil/trace.go:172","msg":"trace[298272656] range","detail":"{range_begin:/registry/resourceclaimtemplates; range_end:; response_count:0; response_revision:1390; }","duration":"135.773447ms","start":"2025-12-06T09:14:55.813217Z","end":"2025-12-06T09:14:55.948991Z","steps":["trace[298272656] 'range keys from in-memory index tree' (duration: 135.456088ms)"],"step_count":1}
{"level":"info","ts":"2025-12-06T09:14:58.317621Z","caller":"traceutil/trace.go:172","msg":"trace[912941987] linearizableReadLoop","detail":"{readStateIndex:1437; appliedIndex:1437; }","duration":"160.294816ms","start":"2025-12-06T09:14:58.157307Z","end":"2025-12-06T09:14:58.317601Z","steps":["trace[912941987] 'read index received' (duration: 160.287803ms)","trace[912941987] 'applied index is now lower than readState.Index' (duration: 6.248µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-06T09:14:58.317800Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.485946ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-06T09:14:58.317821Z","caller":"traceutil/trace.go:172","msg":"trace[282143720] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1397; }","duration":"160.541611ms","start":"2025-12-06T09:14:58.157274Z","end":"2025-12-06T09:14:58.317815Z","steps":["trace[282143720] 'agreement among raft nodes before linearized reading' (duration: 160.464695ms)"],"step_count":1}
{"level":"info","ts":"2025-12-06T09:14:58.317813Z","caller":"traceutil/trace.go:172","msg":"trace[782008309] transaction","detail":"{read_only:false; response_revision:1397; number_of_response:1; }","duration":"175.000766ms","start":"2025-12-06T09:14:58.142734Z","end":"2025-12-06T09:14:58.317734Z","steps":["trace[782008309] 'process raft request' (duration: 174.889306ms)"],"step_count":1}
{"level":"info","ts":"2025-12-06T09:15:23.740170Z","caller":"traceutil/trace.go:172","msg":"trace[23445071] transaction","detail":"{read_only:false; response_revision:1641; number_of_response:1; }","duration":"191.869829ms","start":"2025-12-06T09:15:23.548080Z","end":"2025-12-06T09:15:23.739950Z","steps":["trace[23445071] 'process raft request' (duration: 190.780885ms)"],"step_count":1}
{"level":"info","ts":"2025-12-06T09:15:33.244235Z","caller":"traceutil/trace.go:172","msg":"trace[267520751] linearizableReadLoop","detail":"{readStateIndex:1747; appliedIndex:1747; }","duration":"153.066742ms","start":"2025-12-06T09:15:33.091152Z","end":"2025-12-06T09:15:33.244219Z","steps":["trace[267520751] 'read index received' (duration: 153.061629ms)","trace[267520751] 'applied index is now lower than readState.Index' (duration: 4.349µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-06T09:15:33.244365Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.208518ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-06T09:15:33.244396Z","caller":"traceutil/trace.go:172","msg":"trace[1235348292] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1689; }","duration":"153.256537ms","start":"2025-12-06T09:15:33.091134Z","end":"2025-12-06T09:15:33.244391Z","steps":["trace[1235348292] 'agreement among raft nodes before linearized reading' (duration: 153.183416ms)"],"step_count":1}
{"level":"info","ts":"2025-12-06T09:15:33.244527Z","caller":"traceutil/trace.go:172","msg":"trace[830658517] transaction","detail":"{read_only:false; response_revision:1690; number_of_response:1; }","duration":"349.777776ms","start":"2025-12-06T09:15:32.894737Z","end":"2025-12-06T09:15:33.244514Z","steps":["trace[830658517] 'process raft request' (duration: 349.54617ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-06T09:15:33.244951Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:15:32.894717Z","time spent":"349.849129ms","remote":"127.0.0.1:43552","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1689 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
{"level":"info","ts":"2025-12-06T09:16:03.537650Z","caller":"traceutil/trace.go:172","msg":"trace[1245462227] transaction","detail":"{read_only:false; response_revision:1916; number_of_response:1; }","duration":"119.604921ms","start":"2025-12-06T09:16:03.418010Z","end":"2025-12-06T09:16:03.537615Z","steps":["trace[1245462227] 'process raft request' (duration: 119.496056ms)"],"step_count":1}
==> kernel <==
09:17:39 up 5 min, 0 users, load average: 0.77, 1.72, 0.90
Linux addons-774690 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec 4 13:30:13 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [0cafa29f45be57922934aa2804adc6c03cfd405657efff21b020a18542e39b78] <==
E1206 09:13:54.731278 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.159.197:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.159.197:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.159.197:443: connect: connection refused" logger="UnhandledError"
E1206 09:13:54.735509 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.159.197:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.159.197:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.159.197:443: connect: connection refused" logger="UnhandledError"
I1206 09:13:54.866579 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1206 09:14:40.112004 1 conn.go:339] Error on socket receive: read tcp 192.168.39.249:8443->192.168.39.1:47208: use of closed network connection
E1206 09:14:40.307842 1 conn.go:339] Error on socket receive: read tcp 192.168.39.249:8443->192.168.39.1:47240: use of closed network connection
I1206 09:14:49.697039 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.15.3"}
I1206 09:15:13.561584 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1206 09:15:13.783680 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.104.58"}
E1206 09:15:20.595753 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I1206 09:15:40.392818 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1206 09:15:54.995815 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1206 09:15:54.995867 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1206 09:15:55.029570 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1206 09:15:55.029620 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1206 09:15:55.042773 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1206 09:15:55.043098 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1206 09:15:55.062398 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1206 09:15:55.062494 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1206 09:15:55.182796 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1206 09:15:55.182842 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1206 09:15:55.768108 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
W1206 09:15:56.043377 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1206 09:15:56.183737 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1206 09:15:56.199883 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I1206 09:17:38.151373 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.33.12"}
==> kube-controller-manager [897c83e9715cf47989d744826a723d32ae6a225303a4d9621d6cb1b373e84ebb] <==
E1206 09:16:00.601119 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 09:16:03.533985 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 09:16:03.535075 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 09:16:03.868373 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 09:16:03.869551 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 09:16:06.537782 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 09:16:06.538845 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 09:16:09.941092 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 09:16:09.942265 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 09:16:10.586246 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 09:16:10.587199 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 09:16:17.429757 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 09:16:17.430892 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 09:16:25.114798 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 09:16:25.115987 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 09:16:32.491786 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 09:16:32.493551 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 09:16:42.247566 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 09:16:42.248720 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 09:16:51.259080 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 09:16:51.260232 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 09:17:09.086011 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 09:17:09.087232 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1206 09:17:21.444673 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1206 09:17:21.445841 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [14cd309852195096b88efe9d3a347c387db1c0468ac7422480066369d7c24144] <==
I1206 09:13:02.668513 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1206 09:13:02.771288 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1206 09:13:02.771380 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.249"]
E1206 09:13:02.771537 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1206 09:13:03.068036 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1206 09:13:03.068152 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1206 09:13:03.068194 1 server_linux.go:132] "Using iptables Proxier"
I1206 09:13:03.083320 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1206 09:13:03.085020 1 server.go:527] "Version info" version="v1.34.2"
I1206 09:13:03.085049 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1206 09:13:03.101225 1 config.go:200] "Starting service config controller"
I1206 09:13:03.101256 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1206 09:13:03.101276 1 config.go:106] "Starting endpoint slice config controller"
I1206 09:13:03.101280 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1206 09:13:03.101290 1 config.go:403] "Starting serviceCIDR config controller"
I1206 09:13:03.101293 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1206 09:13:03.102020 1 config.go:309] "Starting node config controller"
I1206 09:13:03.102046 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1206 09:13:03.102053 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1206 09:13:03.202341 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1206 09:13:03.202423 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1206 09:13:03.203473 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
==> kube-scheduler [2cbfd50881df9ebf5b3ff65c7307461fe5f134037643baf679f5a2991aec5829] <==
E1206 09:12:52.240388 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1206 09:12:52.240929 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1206 09:12:52.241070 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1206 09:12:52.241235 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1206 09:12:52.241363 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1206 09:12:53.036841 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1206 09:12:53.041394 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1206 09:12:53.061793 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1206 09:12:53.083291 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1206 09:12:53.107931 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1206 09:12:53.146413 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1206 09:12:53.206695 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1206 09:12:53.212550 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1206 09:12:53.223342 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1206 09:12:53.388885 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1206 09:12:53.437116 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1206 09:12:53.470695 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1206 09:12:53.505539 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1206 09:12:53.546058 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1206 09:12:53.603586 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1206 09:12:53.668946 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1206 09:12:53.726983 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1206 09:12:53.778791 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1206 09:12:53.780020 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
I1206 09:12:56.212824 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Dec 06 09:16:05 addons-774690 kubelet[1485]: E1206 09:16:05.731073 1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012565730609672 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:16:05 addons-774690 kubelet[1485]: E1206 09:16:05.731100 1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012565730609672 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:16:15 addons-774690 kubelet[1485]: E1206 09:16:15.734404 1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012575733988994 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:16:15 addons-774690 kubelet[1485]: E1206 09:16:15.734503 1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012575733988994 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:16:20 addons-774690 kubelet[1485]: I1206 09:16:20.345405 1485 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-l9grt" secret="" err="secret \"gcp-auth\" not found"
Dec 06 09:16:25 addons-774690 kubelet[1485]: E1206 09:16:25.737858 1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012585737258412 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:16:25 addons-774690 kubelet[1485]: E1206 09:16:25.737886 1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012585737258412 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:16:35 addons-774690 kubelet[1485]: E1206 09:16:35.741764 1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012595740823775 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:16:35 addons-774690 kubelet[1485]: E1206 09:16:35.742120 1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012595740823775 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:16:41 addons-774690 kubelet[1485]: I1206 09:16:41.345807 1485 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-svq5h" secret="" err="secret \"gcp-auth\" not found"
Dec 06 09:16:45 addons-774690 kubelet[1485]: E1206 09:16:45.744806 1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012605744093747 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:16:45 addons-774690 kubelet[1485]: E1206 09:16:45.744836 1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012605744093747 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:16:55 addons-774690 kubelet[1485]: E1206 09:16:55.748424 1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012615747979471 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:16:55 addons-774690 kubelet[1485]: E1206 09:16:55.748521 1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012615747979471 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:17:05 addons-774690 kubelet[1485]: E1206 09:17:05.752770 1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012625752217517 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:17:05 addons-774690 kubelet[1485]: E1206 09:17:05.752807 1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012625752217517 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:17:10 addons-774690 kubelet[1485]: I1206 09:17:10.345776 1485 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Dec 06 09:17:15 addons-774690 kubelet[1485]: E1206 09:17:15.756737 1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012635755795997 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:17:15 addons-774690 kubelet[1485]: E1206 09:17:15.756763 1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012635755795997 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:17:25 addons-774690 kubelet[1485]: E1206 09:17:25.760101 1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012645759702985 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:17:25 addons-774690 kubelet[1485]: E1206 09:17:25.760142 1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012645759702985 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:17:27 addons-774690 kubelet[1485]: I1206 09:17:27.345401 1485 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-l9grt" secret="" err="secret \"gcp-auth\" not found"
Dec 06 09:17:35 addons-774690 kubelet[1485]: E1206 09:17:35.763139 1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012655762686436 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:17:35 addons-774690 kubelet[1485]: E1206 09:17:35.763172 1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012655762686436 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
Dec 06 09:17:38 addons-774690 kubelet[1485]: I1206 09:17:38.111764 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdj4d\" (UniqueName: \"kubernetes.io/projected/fc885bae-3988-48eb-958b-64907ecbaeb5-kube-api-access-wdj4d\") pod \"hello-world-app-5d498dc89-twkk5\" (UID: \"fc885bae-3988-48eb-958b-64907ecbaeb5\") " pod="default/hello-world-app-5d498dc89-twkk5"
==> storage-provisioner [194a16bb3f558c9e566d6dd9d5d4d7ad1544b1cfd117ed99a25359e961ff291f] <==
W1206 09:17:13.917473 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:15.921339 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:15.929224 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:17.933510 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:17.939527 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:19.943936 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:19.953945 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:21.958046 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:21.965502 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:23.969620 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:23.978424 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:25.982287 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:25.987383 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:27.991585 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:27.997313 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:30.001494 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:30.007296 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:32.011221 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:32.017303 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:34.021849 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:34.027062 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:36.030122 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:36.039173 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:38.083520 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1206 09:17:38.099741 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-774690 -n addons-774690
helpers_test.go:269: (dbg) Run: kubectl --context addons-774690 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-twkk5 ingress-nginx-admission-create-4c946 ingress-nginx-admission-patch-wjfhp
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context addons-774690 describe pod hello-world-app-5d498dc89-twkk5 ingress-nginx-admission-create-4c946 ingress-nginx-admission-patch-wjfhp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-774690 describe pod hello-world-app-5d498dc89-twkk5 ingress-nginx-admission-create-4c946 ingress-nginx-admission-patch-wjfhp: exit status 1 (75.758633ms)
-- stdout --
Name:             hello-world-app-5d498dc89-twkk5
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-774690/192.168.39.249
Start Time:       Sat, 06 Dec 2025 09:17:38 +0000
Labels:           app=hello-world-app
                  pod-template-hash=5d498dc89
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/hello-world-app-5d498dc89
Containers:
  hello-world-app:
    Container ID:
    Image:          docker.io/kicbase/echo-server:1.0
    Image ID:
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wdj4d (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-wdj4d:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-twkk5 to addons-774690
  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-4c946" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-wjfhp" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-774690 describe pod hello-world-app-5d498dc89-twkk5 ingress-nginx-admission-create-4c946 ingress-nginx-admission-patch-wjfhp: exit status 1
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-774690 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-774690 addons disable ingress-dns --alsologtostderr -v=1: (1.466195759s)
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-774690 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-774690 addons disable ingress --alsologtostderr -v=1: (7.781134099s)
--- FAIL: TestAddons/parallel/Ingress (156.33s)