=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run: kubectl --context addons-153147 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run: kubectl --context addons-153147 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run: kubectl --context addons-153147 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [4486d923-4013-47f9-8cd9-a81f1ddebd66] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [4486d923-4013-47f9-8cd9-a81f1ddebd66] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.004990295s
I1201 19:08:33.763372 16868 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run: out/minikube-linux-amd64 -p addons-153147 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-153147 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.498830247s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run: kubectl --context addons-153147 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run: out/minikube-linux-amd64 -p addons-153147 ip
addons_test.go:299: (dbg) Run: nslookup hello-john.test 192.168.39.9
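Note on the failure above: `minikube ssh` propagates the remote command's exit status, and curl's documented exit code 28 means the operation timed out before any HTTP reply arrived, so the ingress controller was most likely not answering on 127.0.0.1:80 inside the VM at all, rather than returning a wrong body. A minimal manual re-check along these lines (a hypothetical diagnostic sketch, assuming the addons-153147 profile is still running) would be:

    # Re-run the probe with verbose output and a short cap so the stall point is visible
    out/minikube-linux-amd64 -p addons-153147 ssh "curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Confirm the controller pod and its service still look healthy
    kubectl --context addons-153147 -n ingress-nginx get pods,svc -o wide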
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-153147 -n addons-153147
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p addons-153147 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-153147 logs -n 25: (1.177767158s)
helpers_test.go:260: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-433667 │ download-only-433667 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
│ start │ --download-only -p binary-mirror-004263 --alsologtostderr --binary-mirror http://127.0.0.1:36255 --driver=kvm2 --container-runtime=crio │ binary-mirror-004263 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ │
│ delete │ -p binary-mirror-004263 │ binary-mirror-004263 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
│ addons │ enable dashboard -p addons-153147 │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ │
│ addons │ disable dashboard -p addons-153147 │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ │
│ start │ -p addons-153147 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:07 UTC │
│ addons │ addons-153147 addons disable volcano --alsologtostderr -v=1 │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:07 UTC │ 01 Dec 25 19:07 UTC │
│ addons │ addons-153147 addons disable gcp-auth --alsologtostderr -v=1 │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:07 UTC │ 01 Dec 25 19:08 UTC │
│ addons │ addons-153147 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
│ addons │ addons-153147 addons disable yakd --alsologtostderr -v=1 │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
│ addons │ addons-153147 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
│ addons │ enable headlamp -p addons-153147 --alsologtostderr -v=1 │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
│ addons │ addons-153147 addons disable metrics-server --alsologtostderr -v=1 │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
│ ip │ addons-153147 ip │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
│ addons │ addons-153147 addons disable registry --alsologtostderr -v=1 │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
│ ssh │ addons-153147 ssh cat /opt/local-path-provisioner/pvc-4148b11a-9b36-46c4-a96c-f1c2e80569aa_default_test-pvc/file1 │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
│ addons │ addons-153147 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:09 UTC │
│ addons │ addons-153147 addons disable headlamp --alsologtostderr -v=1 │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
│ ssh │ addons-153147 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ │
│ addons │ addons-153147 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-153147 │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
│ addons │ addons-153147 addons disable registry-creds --alsologtostderr -v=1 │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
│ addons │ addons-153147 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:09 UTC │ 01 Dec 25 19:09 UTC │
│ addons │ addons-153147 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:09 UTC │ 01 Dec 25 19:09 UTC │
│ ip │ addons-153147 ip │ addons-153147 │ jenkins │ v1.37.0 │ 01 Dec 25 19:10 UTC │ 01 Dec 25 19:10 UTC │
└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/01 19:05:31
Running on machine: ubuntu-20-agent-8
Binary: Built with gc go1.25.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1201 19:05:31.586200 17783 out.go:360] Setting OutFile to fd 1 ...
I1201 19:05:31.586307 17783 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:05:31.586316 17783 out.go:374] Setting ErrFile to fd 2...
I1201 19:05:31.586320 17783 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:05:31.586507 17783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
I1201 19:05:31.587012 17783 out.go:368] Setting JSON to false
I1201 19:05:31.587800 17783 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2875,"bootTime":1764613057,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1201 19:05:31.587869 17783 start.go:143] virtualization: kvm guest
I1201 19:05:31.589644 17783 out.go:179] * [addons-153147] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1201 19:05:31.590948 17783 out.go:179] - MINIKUBE_LOCATION=21997
I1201 19:05:31.590966 17783 notify.go:221] Checking for updates...
I1201 19:05:31.593340 17783 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1201 19:05:31.594408 17783 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
I1201 19:05:31.595550 17783 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
I1201 19:05:31.596925 17783 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1201 19:05:31.598064 17783 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1201 19:05:31.599325 17783 driver.go:422] Setting default libvirt URI to qemu:///system
I1201 19:05:31.628951 17783 out.go:179] * Using the kvm2 driver based on user configuration
I1201 19:05:31.630049 17783 start.go:309] selected driver: kvm2
I1201 19:05:31.630068 17783 start.go:927] validating driver "kvm2" against <nil>
I1201 19:05:31.630078 17783 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1201 19:05:31.630735 17783 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1201 19:05:31.631008 17783 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1201 19:05:31.631034 17783 cni.go:84] Creating CNI manager for ""
I1201 19:05:31.631070 17783 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1201 19:05:31.631078 17783 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1201 19:05:31.631116 17783 start.go:353] cluster config:
{Name:addons-153147 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-153147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1201 19:05:31.631220 17783 iso.go:125] acquiring lock: {Name:mk6a50ce57553a723db22dad35f70cd00228e9bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1201 19:05:31.633418 17783 out.go:179] * Starting "addons-153147" primary control-plane node in "addons-153147" cluster
I1201 19:05:31.634405 17783 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1201 19:05:31.634430 17783 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
I1201 19:05:31.634437 17783 cache.go:65] Caching tarball of preloaded images
I1201 19:05:31.634515 17783 preload.go:238] Found /home/jenkins/minikube-integration/21997-12903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1201 19:05:31.634525 17783 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
I1201 19:05:31.634822 17783 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/config.json ...
I1201 19:05:31.634855 17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/config.json: {Name:mk849ecfa6433efccbb5c4bb5f92de012794f1c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:05:31.634982 17783 start.go:360] acquireMachinesLock for addons-153147: {Name:mka5785482004af70e425c1e38474157ff061d66 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1201 19:05:31.635032 17783 start.go:364] duration metric: took 37.672µs to acquireMachinesLock for "addons-153147"
I1201 19:05:31.635051 17783 start.go:93] Provisioning new machine with config: &{Name:addons-153147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-153147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1201 19:05:31.635092 17783 start.go:125] createHost starting for "" (driver="kvm2")
I1201 19:05:31.636968 17783 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1201 19:05:31.637118 17783 start.go:159] libmachine.API.Create for "addons-153147" (driver="kvm2")
I1201 19:05:31.637145 17783 client.go:173] LocalClient.Create starting
I1201 19:05:31.637232 17783 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem
I1201 19:05:31.750756 17783 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem
I1201 19:05:31.908343 17783 main.go:143] libmachine: creating domain...
I1201 19:05:31.908364 17783 main.go:143] libmachine: creating network...
I1201 19:05:31.909718 17783 main.go:143] libmachine: found existing default network
I1201 19:05:31.909949 17783 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1201 19:05:31.910425 17783 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e362d0}
I1201 19:05:31.910510 17783 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-153147</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1201 19:05:31.916164 17783 main.go:143] libmachine: creating private network mk-addons-153147 192.168.39.0/24...
I1201 19:05:31.981515 17783 main.go:143] libmachine: private network mk-addons-153147 192.168.39.0/24 created
I1201 19:05:31.981860 17783 main.go:143] libmachine: <network>
<name>mk-addons-153147</name>
<uuid>b23d370b-3063-4245-a4b3-cd356384ef08</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:5a:39:d5'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1201 19:05:31.981889 17783 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147 ...
I1201 19:05:31.981910 17783 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21997-12903/.minikube/cache/iso/amd64/minikube-v1.37.0-1764600683-21997-amd64.iso
I1201 19:05:31.981920 17783 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21997-12903/.minikube
I1201 19:05:31.981974 17783 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21997-12903/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21997-12903/.minikube/cache/iso/amd64/minikube-v1.37.0-1764600683-21997-amd64.iso...
I1201 19:05:32.250954 17783 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa...
I1201 19:05:32.362602 17783 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/addons-153147.rawdisk...
I1201 19:05:32.362642 17783 main.go:143] libmachine: Writing magic tar header
I1201 19:05:32.362667 17783 main.go:143] libmachine: Writing SSH key tar header
I1201 19:05:32.362741 17783 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147 ...
I1201 19:05:32.362805 17783 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147
I1201 19:05:32.362840 17783 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147 (perms=drwx------)
I1201 19:05:32.362850 17783 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12903/.minikube/machines
I1201 19:05:32.362858 17783 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12903/.minikube/machines (perms=drwxr-xr-x)
I1201 19:05:32.362868 17783 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12903/.minikube
I1201 19:05:32.362876 17783 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12903/.minikube (perms=drwxr-xr-x)
I1201 19:05:32.362885 17783 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12903
I1201 19:05:32.362893 17783 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12903 (perms=drwxrwxr-x)
I1201 19:05:32.362903 17783 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1201 19:05:32.362910 17783 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1201 19:05:32.362922 17783 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1201 19:05:32.362930 17783 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1201 19:05:32.362941 17783 main.go:143] libmachine: checking permissions on dir: /home
I1201 19:05:32.362954 17783 main.go:143] libmachine: skipping /home - not owner
I1201 19:05:32.362960 17783 main.go:143] libmachine: defining domain...
I1201 19:05:32.364342 17783 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-153147</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/addons-153147.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-153147'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1201 19:05:32.371809 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:74:84:3b in network default
I1201 19:05:32.372397 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:32.372414 17783 main.go:143] libmachine: starting domain...
I1201 19:05:32.372418 17783 main.go:143] libmachine: ensuring networks are active...
I1201 19:05:32.373133 17783 main.go:143] libmachine: Ensuring network default is active
I1201 19:05:32.373457 17783 main.go:143] libmachine: Ensuring network mk-addons-153147 is active
I1201 19:05:32.374013 17783 main.go:143] libmachine: getting domain XML...
I1201 19:05:32.375027 17783 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-153147</name>
<uuid>b210d02d-07be-4131-97b5-bb937549f8ab</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/addons-153147.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:b9:bf:db'/>
<source network='mk-addons-153147'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:74:84:3b'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1201 19:05:33.655296 17783 main.go:143] libmachine: waiting for domain to start...
I1201 19:05:33.656813 17783 main.go:143] libmachine: domain is now running
I1201 19:05:33.656851 17783 main.go:143] libmachine: waiting for IP...
I1201 19:05:33.657848 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:33.658454 17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
I1201 19:05:33.658512 17783 main.go:143] libmachine: trying to list again with source=arp
I1201 19:05:33.658853 17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
I1201 19:05:33.658905 17783 retry.go:31] will retry after 269.230888ms: waiting for domain to come up
I1201 19:05:33.929366 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:33.929855 17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
I1201 19:05:33.929870 17783 main.go:143] libmachine: trying to list again with source=arp
I1201 19:05:33.930157 17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
I1201 19:05:33.930197 17783 retry.go:31] will retry after 305.63835ms: waiting for domain to come up
I1201 19:05:34.237864 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:34.238366 17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
I1201 19:05:34.238380 17783 main.go:143] libmachine: trying to list again with source=arp
I1201 19:05:34.238652 17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
I1201 19:05:34.238690 17783 retry.go:31] will retry after 446.840166ms: waiting for domain to come up
I1201 19:05:34.687368 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:34.687897 17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
I1201 19:05:34.687911 17783 main.go:143] libmachine: trying to list again with source=arp
I1201 19:05:34.688219 17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
I1201 19:05:34.688251 17783 retry.go:31] will retry after 482.929364ms: waiting for domain to come up
I1201 19:05:35.172982 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:35.173477 17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
I1201 19:05:35.173492 17783 main.go:143] libmachine: trying to list again with source=arp
I1201 19:05:35.173818 17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
I1201 19:05:35.173861 17783 retry.go:31] will retry after 517.844571ms: waiting for domain to come up
I1201 19:05:35.693488 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:35.694026 17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
I1201 19:05:35.694043 17783 main.go:143] libmachine: trying to list again with source=arp
I1201 19:05:35.694401 17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
I1201 19:05:35.694434 17783 retry.go:31] will retry after 589.021743ms: waiting for domain to come up
I1201 19:05:36.285251 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:36.285755 17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
I1201 19:05:36.285770 17783 main.go:143] libmachine: trying to list again with source=arp
I1201 19:05:36.286035 17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
I1201 19:05:36.286065 17783 retry.go:31] will retry after 763.414346ms: waiting for domain to come up
I1201 19:05:37.052005 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:37.052989 17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
I1201 19:05:37.053007 17783 main.go:143] libmachine: trying to list again with source=arp
I1201 19:05:37.053334 17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
I1201 19:05:37.053364 17783 retry.go:31] will retry after 1.423779057s: waiting for domain to come up
I1201 19:05:38.478416 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:38.478986 17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
I1201 19:05:38.479012 17783 main.go:143] libmachine: trying to list again with source=arp
I1201 19:05:38.479258 17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
I1201 19:05:38.479294 17783 retry.go:31] will retry after 1.388017801s: waiting for domain to come up
I1201 19:05:39.868704 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:39.869213 17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
I1201 19:05:39.869226 17783 main.go:143] libmachine: trying to list again with source=arp
I1201 19:05:39.869506 17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
I1201 19:05:39.869536 17783 retry.go:31] will retry after 2.181859207s: waiting for domain to come up
I1201 19:05:42.053090 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:42.053658 17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
I1201 19:05:42.053672 17783 main.go:143] libmachine: trying to list again with source=arp
I1201 19:05:42.053966 17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
I1201 19:05:42.053994 17783 retry.go:31] will retry after 2.483985266s: waiting for domain to come up
I1201 19:05:44.539387 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:44.539921 17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
I1201 19:05:44.539935 17783 main.go:143] libmachine: trying to list again with source=arp
I1201 19:05:44.540186 17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
I1201 19:05:44.540213 17783 retry.go:31] will retry after 3.116899486s: waiting for domain to come up
I1201 19:05:47.658994 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:47.659571 17783 main.go:143] libmachine: domain addons-153147 has current primary IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:47.659586 17783 main.go:143] libmachine: found domain IP: 192.168.39.9
I1201 19:05:47.659597 17783 main.go:143] libmachine: reserving static IP address...
I1201 19:05:47.660075 17783 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-153147", mac: "52:54:00:b9:bf:db", ip: "192.168.39.9"} in network mk-addons-153147
I1201 19:05:47.847637 17783 main.go:143] libmachine: reserved static IP address 192.168.39.9 for domain addons-153147
I1201 19:05:47.847661 17783 main.go:143] libmachine: waiting for SSH...
I1201 19:05:47.847669 17783 main.go:143] libmachine: Getting to WaitForSSH function...
I1201 19:05:47.850124 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:47.850532 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b9:bf:db}
I1201 19:05:47.850560 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:47.850766 17783 main.go:143] libmachine: Using SSH client type: native
I1201 19:05:47.850969 17783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.9 22 <nil> <nil>}
I1201 19:05:47.850978 17783 main.go:143] libmachine: About to run SSH command:
exit 0
I1201 19:05:47.961632 17783 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1201 19:05:47.961994 17783 main.go:143] libmachine: domain creation complete
I1201 19:05:47.963574 17783 machine.go:94] provisionDockerMachine start ...
I1201 19:05:47.966152 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:47.966564 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:05:47.966591 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:47.966739 17783 main.go:143] libmachine: Using SSH client type: native
I1201 19:05:47.966945 17783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.9 22 <nil> <nil>}
I1201 19:05:47.966958 17783 main.go:143] libmachine: About to run SSH command:
hostname
I1201 19:05:48.078505 17783 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1201 19:05:48.078531 17783 buildroot.go:166] provisioning hostname "addons-153147"
I1201 19:05:48.081192 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:48.081561 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:05:48.081586 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:48.081766 17783 main.go:143] libmachine: Using SSH client type: native
I1201 19:05:48.081982 17783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.9 22 <nil> <nil>}
I1201 19:05:48.081998 17783 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-153147 && echo "addons-153147" | sudo tee /etc/hostname
I1201 19:05:48.211218 17783 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-153147
I1201 19:05:48.214103 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:48.214533 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:05:48.214563 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:48.214734 17783 main.go:143] libmachine: Using SSH client type: native
I1201 19:05:48.215049 17783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.9 22 <nil> <nil>}
I1201 19:05:48.215077 17783 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-153147' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-153147/g' /etc/hosts;
else
echo '127.0.1.1 addons-153147' | sudo tee -a /etc/hosts;
fi
fi
I1201 19:05:48.337672 17783 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1201 19:05:48.337698 17783 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12903/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12903/.minikube}
I1201 19:05:48.337744 17783 buildroot.go:174] setting up certificates
I1201 19:05:48.337754 17783 provision.go:84] configureAuth start
I1201 19:05:48.340636 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:48.341025 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:05:48.341059 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:48.343518 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:48.343934 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:05:48.343958 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:48.344096 17783 provision.go:143] copyHostCerts
I1201 19:05:48.344164 17783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem (1675 bytes)
I1201 19:05:48.344288 17783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem (1078 bytes)
I1201 19:05:48.344348 17783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem (1123 bytes)
I1201 19:05:48.344420 17783 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem org=jenkins.addons-153147 san=[127.0.0.1 192.168.39.9 addons-153147 localhost minikube]
I1201 19:05:48.586584 17783 provision.go:177] copyRemoteCerts
I1201 19:05:48.586636 17783 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1201 19:05:48.589191 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:48.589562 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:05:48.589582 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:48.589732 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:05:48.673984 17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1201 19:05:48.703274 17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1201 19:05:48.732950 17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1201 19:05:48.761737 17783 provision.go:87] duration metric: took 423.969079ms to configureAuth
I1201 19:05:48.761772 17783 buildroot.go:189] setting minikube options for container-runtime
I1201 19:05:48.761985 17783 config.go:182] Loaded profile config "addons-153147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:05:48.764885 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:48.765301 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:05:48.765331 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:48.765543 17783 main.go:143] libmachine: Using SSH client type: native
I1201 19:05:48.765754 17783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.9 22 <nil> <nil>}
I1201 19:05:48.765775 17783 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1201 19:05:48.995919 17783 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1201 19:05:48.995945 17783 machine.go:97] duration metric: took 1.032352169s to provisionDockerMachine
I1201 19:05:48.995957 17783 client.go:176] duration metric: took 17.358803255s to LocalClient.Create
I1201 19:05:48.995975 17783 start.go:167] duration metric: took 17.358856135s to libmachine.API.Create "addons-153147"
I1201 19:05:48.995984 17783 start.go:293] postStartSetup for "addons-153147" (driver="kvm2")
I1201 19:05:48.995998 17783 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1201 19:05:48.996063 17783 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1201 19:05:48.999169 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:48.999571 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:05:48.999598 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:48.999755 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:05:49.085082 17783 ssh_runner.go:195] Run: cat /etc/os-release
I1201 19:05:49.090179 17783 info.go:137] Remote host: Buildroot 2025.02.8
I1201 19:05:49.090210 17783 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12903/.minikube/addons for local assets ...
I1201 19:05:49.090285 17783 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12903/.minikube/files for local assets ...
I1201 19:05:49.090311 17783 start.go:296] duration metric: took 94.320335ms for postStartSetup
I1201 19:05:49.093335 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:49.093679 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:05:49.093703 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:49.093923 17783 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/config.json ...
I1201 19:05:49.094095 17783 start.go:128] duration metric: took 17.458994341s to createHost
I1201 19:05:49.096259 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:49.096666 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:05:49.096692 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:49.096862 17783 main.go:143] libmachine: Using SSH client type: native
I1201 19:05:49.097036 17783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.9 22 <nil> <nil>}
I1201 19:05:49.097052 17783 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1201 19:05:49.203990 17783 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764615949.160336512
I1201 19:05:49.204012 17783 fix.go:216] guest clock: 1764615949.160336512
I1201 19:05:49.204021 17783 fix.go:229] Guest: 2025-12-01 19:05:49.160336512 +0000 UTC Remote: 2025-12-01 19:05:49.094105721 +0000 UTC m=+17.552949213 (delta=66.230791ms)
I1201 19:05:49.204041 17783 fix.go:200] guest clock delta is within tolerance: 66.230791ms
I1201 19:05:49.204047 17783 start.go:83] releasing machines lock for "addons-153147", held for 17.569006332s
I1201 19:05:49.206481 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:49.206971 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:05:49.206998 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:49.207555 17783 ssh_runner.go:195] Run: cat /version.json
I1201 19:05:49.207633 17783 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1201 19:05:49.210535 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:49.210807 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:49.211006 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:05:49.211041 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:49.211219 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:05:49.211223 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:05:49.211248 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:49.211402 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:05:49.290335 17783 ssh_runner.go:195] Run: systemctl --version
I1201 19:05:49.327928 17783 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1201 19:05:49.488057 17783 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1201 19:05:49.494308 17783 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1201 19:05:49.494404 17783 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1201 19:05:49.514619 17783 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1201 19:05:49.514647 17783 start.go:496] detecting cgroup driver to use...
I1201 19:05:49.514715 17783 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1201 19:05:49.532179 17783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1201 19:05:49.549290 17783 docker.go:218] disabling cri-docker service (if available) ...
I1201 19:05:49.549358 17783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1201 19:05:49.565766 17783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1201 19:05:49.581413 17783 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1201 19:05:49.727729 17783 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1201 19:05:49.928358 17783 docker.go:234] disabling docker service ...
I1201 19:05:49.928420 17783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1201 19:05:49.944791 17783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1201 19:05:49.959738 17783 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1201 19:05:50.104541 17783 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1201 19:05:50.241878 17783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1201 19:05:50.257183 17783 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1201 19:05:50.279415 17783 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1201 19:05:50.279498 17783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1201 19:05:50.293978 17783 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1201 19:05:50.294041 17783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1201 19:05:50.306162 17783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1201 19:05:50.319407 17783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1201 19:05:50.332285 17783 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1201 19:05:50.345725 17783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1201 19:05:50.359119 17783 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1201 19:05:50.381056 17783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
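Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (a sketch assembled from the commands, not a dump of the file):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The unprivileged-port sysctl lets pods such as the ingress controller bind low ports (80/443) without running as root or needing extra capabilities.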
I1201 19:05:50.393414 17783 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1201 19:05:50.405369 17783 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1201 19:05:50.405433 17783 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1201 19:05:50.428266 17783 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
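The two commands above are the standard bridge-netfilter preparation: the sysctl probe fails only because the br_netfilter module is not loaded yet (so /proc/sys/net/bridge/ does not exist); loading the module and enabling IPv4 forwarding satisfies kube-proxy's prerequisites. A manual check inside the VM would look something like:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # should now print "... = 1"
    cat /proc/sys/net/ipv4/ip_forward           # should print 1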
I1201 19:05:50.443751 17783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1201 19:05:50.588529 17783 ssh_runner.go:195] Run: sudo systemctl restart crio
I1201 19:05:50.693392 17783 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1201 19:05:50.693486 17783 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1201 19:05:50.698540 17783 start.go:564] Will wait 60s for crictl version
I1201 19:05:50.698615 17783 ssh_runner.go:195] Run: which crictl
I1201 19:05:50.702571 17783 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1201 19:05:50.737545 17783 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1201 19:05:50.737656 17783 ssh_runner.go:195] Run: crio --version
I1201 19:05:50.767623 17783 ssh_runner.go:195] Run: crio --version
I1201 19:05:50.798792 17783 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
I1201 19:05:50.802480 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:50.802809 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:05:50.802851 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:05:50.803071 17783 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1201 19:05:50.807383 17783 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
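The one-liner above is minikube's host-alias refresh, expanded here for readability (same logic, formatting is illustrative):

    { grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale alias line
      echo "192.168.39.1 host.minikube.internal"        # re-add it pointing at the host gateway
    } > /tmp/h.$$                                       # stage in a PID-named temp file
    sudo cp /tmp/h.$$ /etc/hosts                        # cp (not mv) keeps the file's inode and permissions

The same pattern is reused below for control-plane.minikube.internal.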
I1201 19:05:50.821395 17783 kubeadm.go:884] updating cluster {Name:addons-153147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-153147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1201 19:05:50.821481 17783 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1201 19:05:50.821521 17783 ssh_runner.go:195] Run: sudo crictl images --output json
I1201 19:05:50.851316 17783 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
I1201 19:05:50.851423 17783 ssh_runner.go:195] Run: which lz4
I1201 19:05:50.855473 17783 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1201 19:05:50.859915 17783 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1201 19:05:50.859947 17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
I1201 19:05:52.013121 17783 crio.go:462] duration metric: took 1.157673577s to copy over tarball
I1201 19:05:52.013200 17783 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1201 19:05:53.447967 17783 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.434744249s)
I1201 19:05:53.447990 17783 crio.go:469] duration metric: took 1.434841606s to extract the tarball
I1201 19:05:53.447996 17783 ssh_runner.go:146] rm: /preloaded.tar.lz4
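After extraction, the preloaded images are visible through the CRI, which is exactly what the next log line queries. A manual spot-check inside the VM (hypothetical invocation) would be:

    sudo crictl images | grep kube-apiserver   # should list registry.k8s.io/kube-apiserver:v1.34.2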
I1201 19:05:53.484551 17783 ssh_runner.go:195] Run: sudo crictl images --output json
I1201 19:05:53.522626 17783 crio.go:514] all images are preloaded for cri-o runtime.
I1201 19:05:53.522647 17783 cache_images.go:86] Images are preloaded, skipping loading
I1201 19:05:53.522655 17783 kubeadm.go:935] updating node { 192.168.39.9 8443 v1.34.2 crio true true} ...
I1201 19:05:53.522729 17783 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-153147 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.9
[Install]
config:
{KubernetesVersion:v1.34.2 ClusterName:addons-153147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1201 19:05:53.522789 17783 ssh_runner.go:195] Run: crio config
I1201 19:05:53.567884 17783 cni.go:84] Creating CNI manager for ""
I1201 19:05:53.567906 17783 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1201 19:05:53.567927 17783 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1201 19:05:53.567955 17783 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.9 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-153147 NodeName:addons-153147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1201 19:05:53.568074 17783 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.9
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "addons-153147"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.39.9"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.9"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
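The rendered config is copied to /var/tmp/minikube/kubeadm.yaml further down before kubeadm init consumes it. A hypothetical way to sanity-check such a file by hand (kubeadm ships a validate subcommand since v1.26):

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml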
I1201 19:05:53.568131 17783 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
I1201 19:05:53.579573 17783 binaries.go:51] Found k8s binaries, skipping transfer
I1201 19:05:53.579629 17783 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1201 19:05:53.590726 17783 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
I1201 19:05:53.610871 17783 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1201 19:05:53.631337 17783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
I1201 19:05:53.651765 17783 ssh_runner.go:195] Run: grep 192.168.39.9 control-plane.minikube.internal$ /etc/hosts
I1201 19:05:53.655721 17783 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.9 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1201 19:05:53.670504 17783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1201 19:05:53.808706 17783 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1201 19:05:53.837003 17783 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147 for IP: 192.168.39.9
I1201 19:05:53.837025 17783 certs.go:195] generating shared ca certs ...
I1201 19:05:53.837047 17783 certs.go:227] acquiring lock for ca certs: {Name:mk7e1ff47c53decb016970932c61ce60ac92f0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:05:53.837193 17783 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key
I1201 19:05:53.894756 17783 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt ...
I1201 19:05:53.894783 17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt: {Name:mk9d92e4ed7e08dd0b90f17ae2238e4b3cab654f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:05:53.894965 17783 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key ...
I1201 19:05:53.894977 17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key: {Name:mkef5ca972f1c69a34c7abb8ad1cfe5908f2c969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:05:53.895051 17783 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key
I1201 19:05:54.008875 17783 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.crt ...
I1201 19:05:54.008899 17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.crt: {Name:mk57e693a03a2819def8c3cf0c009113054618ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:05:54.009057 17783 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key ...
I1201 19:05:54.009068 17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key: {Name:mk1c9e3ef68f6fdd21e9d3833c157a47757f195c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:05:54.009133 17783 certs.go:257] generating profile certs ...
I1201 19:05:54.009180 17783 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.key
I1201 19:05:54.009194 17783 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt with IP's: []
I1201 19:05:54.209034 17783 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt ...
I1201 19:05:54.209068 17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: {Name:mk42d80b6d9c11d66552eaaf3a875bce22bfb0f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:05:54.209710 17783 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.key ...
I1201 19:05:54.209728 17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.key: {Name:mkdf76aa61afbf60ff90312f9447b1ce21ead418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:05:54.209868 17783 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.key.ea1010ab
I1201 19:05:54.209892 17783 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.crt.ea1010ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.9]
I1201 19:05:54.290382 17783 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.crt.ea1010ab ...
I1201 19:05:54.290408 17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.crt.ea1010ab: {Name:mkbb09da0d23c7ccd21267c6f7310ddc23bc0f80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:05:54.290563 17783 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.key.ea1010ab ...
I1201 19:05:54.290576 17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.key.ea1010ab: {Name:mk563d827d9c5afb8b9cf8238ec44bfa097e94c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:05:54.290647 17783 certs.go:382] copying /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.crt.ea1010ab -> /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.crt
I1201 19:05:54.291216 17783 certs.go:386] copying /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.key.ea1010ab -> /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.key
I1201 19:05:54.291290 17783 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/proxy-client.key
I1201 19:05:54.291310 17783 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/proxy-client.crt with IP's: []
I1201 19:05:54.336866 17783 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/proxy-client.crt ...
I1201 19:05:54.336896 17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/proxy-client.crt: {Name:mkd0ff1eba9b217ab374efa12ac807423e770c6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:05:54.337062 17783 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/proxy-client.key ...
I1201 19:05:54.337074 17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/proxy-client.key: {Name:mk0d95ada5a63120bc1d07e56cc5ac788f250ee8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:05:54.337250 17783 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem (1679 bytes)
I1201 19:05:54.337287 17783 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem (1078 bytes)
I1201 19:05:54.337312 17783 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem (1123 bytes)
I1201 19:05:54.337334 17783 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem (1675 bytes)
I1201 19:05:54.337822 17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1201 19:05:54.368457 17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1201 19:05:54.397140 17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1201 19:05:54.426223 17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1201 19:05:54.454176 17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1201 19:05:54.482700 17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1201 19:05:54.515950 17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1201 19:05:54.554672 17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1201 19:05:54.585442 17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1201 19:05:54.614608 17783 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1201 19:05:54.634421 17783 ssh_runner.go:195] Run: openssl version
I1201 19:05:54.640647 17783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1201 19:05:54.653314 17783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1201 19:05:54.658195 17783 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 1 19:05 /usr/share/ca-certificates/minikubeCA.pem
I1201 19:05:54.658245 17783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1201 19:05:54.665193 17783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
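The openssl x509 -hash call above computes the subject-name hash that OpenSSL uses to look up CAs in /etc/ssl/certs; the b5213941.0 symlink name is that hash plus a .0 suffix. To confirm the mapping by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem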
I1201 19:05:54.677859 17783 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1201 19:05:54.682534 17783 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1201 19:05:54.682596 17783 kubeadm.go:401] StartCluster: {Name:addons-153147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-153147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1201 19:05:54.682680 17783 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1201 19:05:54.682759 17783 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1201 19:05:54.718913 17783 cri.go:89] found id: ""
I1201 19:05:54.718992 17783 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1201 19:05:54.731282 17783 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1201 19:05:54.742651 17783 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1201 19:05:54.754291 17783 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1201 19:05:54.754310 17783 kubeadm.go:158] found existing configuration files:
I1201 19:05:54.754353 17783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1201 19:05:54.764931 17783 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1201 19:05:54.765007 17783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1201 19:05:54.776128 17783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1201 19:05:54.786247 17783 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1201 19:05:54.786318 17783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1201 19:05:54.797437 17783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1201 19:05:54.807692 17783 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1201 19:05:54.807757 17783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1201 19:05:54.818667 17783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1201 19:05:54.829163 17783 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1201 19:05:54.829234 17783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
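The four grep-then-rm pairs above amount to one stale-config sweep: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it. In loop form (a sketch, not what minikube actually runs):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' \
        "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done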
I1201 19:05:54.840332 17783 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1201 19:05:54.980139 17783 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1201 19:06:07.007304 17783 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
I1201 19:06:07.007390 17783 kubeadm.go:319] [preflight] Running pre-flight checks
I1201 19:06:07.007497 17783 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1201 19:06:07.007612 17783 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1201 19:06:07.007691 17783 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1201 19:06:07.007769 17783 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1201 19:06:07.009237 17783 out.go:252] - Generating certificates and keys ...
I1201 19:06:07.009314 17783 kubeadm.go:319] [certs] Using existing ca certificate authority
I1201 19:06:07.009382 17783 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1201 19:06:07.009461 17783 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1201 19:06:07.009511 17783 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1201 19:06:07.009566 17783 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1201 19:06:07.009613 17783 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1201 19:06:07.009660 17783 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1201 19:06:07.009782 17783 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-153147 localhost] and IPs [192.168.39.9 127.0.0.1 ::1]
I1201 19:06:07.009869 17783 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1201 19:06:07.010017 17783 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-153147 localhost] and IPs [192.168.39.9 127.0.0.1 ::1]
I1201 19:06:07.010089 17783 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1201 19:06:07.010150 17783 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1201 19:06:07.010198 17783 kubeadm.go:319] [certs] Generating "sa" key and public key
I1201 19:06:07.010252 17783 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1201 19:06:07.010314 17783 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1201 19:06:07.010380 17783 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1201 19:06:07.010463 17783 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1201 19:06:07.010547 17783 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1201 19:06:07.010631 17783 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1201 19:06:07.010742 17783 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1201 19:06:07.010852 17783 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1201 19:06:07.012226 17783 out.go:252] - Booting up control plane ...
I1201 19:06:07.012352 17783 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1201 19:06:07.012475 17783 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1201 19:06:07.012579 17783 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1201 19:06:07.012738 17783 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1201 19:06:07.012887 17783 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1201 19:06:07.013031 17783 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1201 19:06:07.013150 17783 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1201 19:06:07.013195 17783 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1201 19:06:07.013305 17783 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1201 19:06:07.013472 17783 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1201 19:06:07.013533 17783 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00161678s
I1201 19:06:07.013651 17783 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1201 19:06:07.013753 17783 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.9:8443/livez
I1201 19:06:07.013891 17783 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1201 19:06:07.013991 17783 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1201 19:06:07.014112 17783 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.618952021s
I1201 19:06:07.014222 17783 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.636160671s
I1201 19:06:07.014332 17783 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501932664s
I1201 19:06:07.014462 17783 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1201 19:06:07.014639 17783 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1201 19:06:07.014719 17783 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1201 19:06:07.014971 17783 kubeadm.go:319] [mark-control-plane] Marking the node addons-153147 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1201 19:06:07.015039 17783 kubeadm.go:319] [bootstrap-token] Using token: 7vt6ii.w2s814lac513ec53
I1201 19:06:07.016494 17783 out.go:252] - Configuring RBAC rules ...
I1201 19:06:07.016589 17783 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1201 19:06:07.016662 17783 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1201 19:06:07.016821 17783 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1201 19:06:07.017001 17783 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1201 19:06:07.017150 17783 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1201 19:06:07.017275 17783 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1201 19:06:07.017469 17783 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1201 19:06:07.017556 17783 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1201 19:06:07.017646 17783 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1201 19:06:07.017656 17783 kubeadm.go:319]
I1201 19:06:07.017758 17783 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1201 19:06:07.017775 17783 kubeadm.go:319]
I1201 19:06:07.017897 17783 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1201 19:06:07.017909 17783 kubeadm.go:319]
I1201 19:06:07.017949 17783 kubeadm.go:319] mkdir -p $HOME/.kube
I1201 19:06:07.018037 17783 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1201 19:06:07.018107 17783 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1201 19:06:07.018114 17783 kubeadm.go:319]
I1201 19:06:07.018160 17783 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1201 19:06:07.018166 17783 kubeadm.go:319]
I1201 19:06:07.018220 17783 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1201 19:06:07.018229 17783 kubeadm.go:319]
I1201 19:06:07.018307 17783 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1201 19:06:07.018391 17783 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1201 19:06:07.018486 17783 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1201 19:06:07.018516 17783 kubeadm.go:319]
I1201 19:06:07.018622 17783 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1201 19:06:07.018727 17783 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1201 19:06:07.018743 17783 kubeadm.go:319]
I1201 19:06:07.018857 17783 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7vt6ii.w2s814lac513ec53 \
I1201 19:06:07.018946 17783 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:f7850289782e26755534bbd10a21d664dd20b89a823d3fd24570eae03b241557 \
I1201 19:06:07.018964 17783 kubeadm.go:319] --control-plane
I1201 19:06:07.018967 17783 kubeadm.go:319]
I1201 19:06:07.019038 17783 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1201 19:06:07.019043 17783 kubeadm.go:319]
I1201 19:06:07.019118 17783 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7vt6ii.w2s814lac513ec53 \
I1201 19:06:07.019225 17783 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:f7850289782e26755534bbd10a21d664dd20b89a823d3fd24570eae03b241557
I1201 19:06:07.019248 17783 cni.go:84] Creating CNI manager for ""
I1201 19:06:07.019255 17783 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1201 19:06:07.020633 17783 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1201 19:06:07.021733 17783 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1201 19:06:07.035749 17783 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
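The 496-byte conflist written above is minikube's bridge CNI configuration for the pod CIDR chosen earlier (10.244.0.0/16). To inspect the exact file on a live profile, something like this works, mirroring the ssh invocations used elsewhere in this run:

    minikube -p addons-153147 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"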
I1201 19:06:07.063357 17783 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1201 19:06:07.063458 17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1201 19:06:07.063477 17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-153147 minikube.k8s.io/updated_at=2025_12_01T19_06_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9 minikube.k8s.io/name=addons-153147 minikube.k8s.io/primary=true
I1201 19:06:07.187074 17783 ops.go:34] apiserver oom_adj: -16
I1201 19:06:07.187138 17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1201 19:06:07.687221 17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1201 19:06:08.187444 17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1201 19:06:08.687566 17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1201 19:06:09.187348 17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1201 19:06:09.687359 17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1201 19:06:10.188197 17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1201 19:06:10.687490 17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1201 19:06:11.188027 17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1201 19:06:11.687755 17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1201 19:06:12.188018 17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1201 19:06:12.361106 17783 kubeadm.go:1114] duration metric: took 5.297716741s to wait for elevateKubeSystemPrivileges
I1201 19:06:12.361145 17783 kubeadm.go:403] duration metric: took 17.678554909s to StartCluster
I1201 19:06:12.361185 17783 settings.go:142] acquiring lock: {Name:mk63d3c798c3f817a653e3e39f757c57080fff76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:06:12.361318 17783 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21997-12903/kubeconfig
I1201 19:06:12.361798 17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/kubeconfig: {Name:mkf67691ba90fcc0b34f838eaae92a26f4e31096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:06:12.362047 17783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1201 19:06:12.362078 17783 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1201 19:06:12.362163 17783 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1201 19:06:12.362310 17783 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-153147"
I1201 19:06:12.362324 17783 addons.go:70] Setting yakd=true in profile "addons-153147"
I1201 19:06:12.362624 17783 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-153147"
I1201 19:06:12.362673 17783 host.go:66] Checking if "addons-153147" exists ...
I1201 19:06:12.362698 17783 addons.go:239] Setting addon yakd=true in "addons-153147"
I1201 19:06:12.362874 17783 host.go:66] Checking if "addons-153147" exists ...
I1201 19:06:12.362913 17783 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-153147"
I1201 19:06:12.362938 17783 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-153147"
I1201 19:06:12.362955 17783 config.go:182] Loaded profile config "addons-153147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:06:12.362980 17783 addons.go:70] Setting cloud-spanner=true in profile "addons-153147"
I1201 19:06:12.363991 17783 addons.go:239] Setting addon cloud-spanner=true in "addons-153147"
I1201 19:06:12.364044 17783 host.go:66] Checking if "addons-153147" exists ...
I1201 19:06:12.363000 17783 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-153147"
I1201 19:06:12.364497 17783 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-153147"
I1201 19:06:12.364523 17783 host.go:66] Checking if "addons-153147" exists ...
I1201 19:06:12.363010 17783 addons.go:70] Setting default-storageclass=true in profile "addons-153147"
I1201 19:06:12.364594 17783 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-153147"
I1201 19:06:12.363019 17783 addons.go:70] Setting inspektor-gadget=true in profile "addons-153147"
I1201 19:06:12.364991 17783 addons.go:239] Setting addon inspektor-gadget=true in "addons-153147"
I1201 19:06:12.363022 17783 addons.go:70] Setting ingress-dns=true in profile "addons-153147"
I1201 19:06:12.363031 17783 addons.go:70] Setting gcp-auth=true in profile "addons-153147"
I1201 19:06:12.363027 17783 addons.go:70] Setting metrics-server=true in profile "addons-153147"
I1201 19:06:12.363040 17783 addons.go:70] Setting ingress=true in profile "addons-153147"
I1201 19:06:12.363044 17783 addons.go:70] Setting storage-provisioner=true in profile "addons-153147"
I1201 19:06:12.363069 17783 host.go:66] Checking if "addons-153147" exists ...
I1201 19:06:12.363107 17783 addons.go:70] Setting registry-creds=true in profile "addons-153147"
I1201 19:06:12.363107 17783 addons.go:70] Setting registry=true in profile "addons-153147"
I1201 19:06:12.363163 17783 addons.go:70] Setting volumesnapshots=true in profile "addons-153147"
I1201 19:06:12.363223 17783 addons.go:70] Setting volcano=true in profile "addons-153147"
I1201 19:06:12.363335 17783 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-153147"
I1201 19:06:12.365083 17783 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-153147"
I1201 19:06:12.365140 17783 addons.go:239] Setting addon ingress=true in "addons-153147"
I1201 19:06:12.365166 17783 addons.go:239] Setting addon ingress-dns=true in "addons-153147"
I1201 19:06:12.365189 17783 host.go:66] Checking if "addons-153147" exists ...
I1201 19:06:12.365200 17783 host.go:66] Checking if "addons-153147" exists ...
I1201 19:06:12.365362 17783 addons.go:239] Setting addon registry=true in "addons-153147"
I1201 19:06:12.365434 17783 host.go:66] Checking if "addons-153147" exists ...
I1201 19:06:12.365465 17783 out.go:179] * Verifying Kubernetes components...
I1201 19:06:12.365093 17783 addons.go:239] Setting addon storage-provisioner=true in "addons-153147"
I1201 19:06:12.365921 17783 host.go:66] Checking if "addons-153147" exists ...
I1201 19:06:12.366131 17783 addons.go:239] Setting addon registry-creds=true in "addons-153147"
I1201 19:06:12.366165 17783 host.go:66] Checking if "addons-153147" exists ...
I1201 19:06:12.365118 17783 mustload.go:66] Loading cluster: addons-153147
I1201 19:06:12.365129 17783 addons.go:239] Setting addon metrics-server=true in "addons-153147"
I1201 19:06:12.366182 17783 addons.go:239] Setting addon volumesnapshots=true in "addons-153147"
I1201 19:06:12.366193 17783 host.go:66] Checking if "addons-153147" exists ...
I1201 19:06:12.366205 17783 host.go:66] Checking if "addons-153147" exists ...
I1201 19:06:12.365147 17783 host.go:66] Checking if "addons-153147" exists ...
I1201 19:06:12.366690 17783 addons.go:239] Setting addon volcano=true in "addons-153147"
I1201 19:06:12.366766 17783 host.go:66] Checking if "addons-153147" exists ...
I1201 19:06:12.367386 17783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1201 19:06:12.367879 17783 config.go:182] Loaded profile config "addons-153147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:06:12.374765 17783 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-153147"
I1201 19:06:12.374768 17783 addons.go:239] Setting addon default-storageclass=true in "addons-153147"
I1201 19:06:12.374804 17783 host.go:66] Checking if "addons-153147" exists ...
I1201 19:06:12.374811 17783 host.go:66] Checking if "addons-153147" exists ...
I1201 19:06:12.375506 17783 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1201 19:06:12.375516 17783 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1201 19:06:12.375546 17783 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
I1201 19:06:12.375613 17783 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
I1201 19:06:12.375622 17783 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1201 19:06:12.376703 17783 out.go:179] - Using image docker.io/registry:3.0.0
I1201 19:06:12.376714 17783 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
W1201 19:06:12.377151 17783 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1201 19:06:12.377505 17783 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1201 19:06:12.377526 17783 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1201 19:06:12.377560 17783 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1201 19:06:12.378876 17783 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1201 19:06:12.378903 17783 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1201 19:06:12.378922 17783 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1201 19:06:12.377563 17783 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1201 19:06:12.378979 17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1201 19:06:12.377587 17783 host.go:66] Checking if "addons-153147" exists ...
I1201 19:06:12.378383 17783 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1201 19:06:12.378399 17783 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1201 19:06:12.378413 17783 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1201 19:06:12.379437 17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1201 19:06:12.379634 17783 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
I1201 19:06:12.379670 17783 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1201 19:06:12.379676 17783 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1201 19:06:12.379681 17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1201 19:06:12.380578 17783 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1201 19:06:12.380598 17783 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1201 19:06:12.380611 17783 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1201 19:06:12.380577 17783 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1201 19:06:12.380601 17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1201 19:06:12.381259 17783 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1201 19:06:12.381269 17783 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1201 19:06:12.381285 17783 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1201 19:06:12.381290 17783 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1201 19:06:12.381806 17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1201 19:06:12.381336 17783 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1201 19:06:12.381953 17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1201 19:06:12.381371 17783 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1201 19:06:12.381988 17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1201 19:06:12.382656 17783 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1201 19:06:12.382674 17783 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1201 19:06:12.382741 17783 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1201 19:06:12.382809 17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1201 19:06:12.383597 17783 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1201 19:06:12.383616 17783 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1201 19:06:12.383631 17783 out.go:179] - Using image docker.io/busybox:stable
I1201 19:06:12.384787 17783 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1201 19:06:12.384798 17783 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1201 19:06:12.384803 17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1201 19:06:12.384812 17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1201 19:06:12.385008 17783 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1201 19:06:12.386372 17783 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1201 19:06:12.387616 17783 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1201 19:06:12.388532 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.388994 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.389429 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.389769 17783 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1201 19:06:12.390209 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.390462 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:06:12.390502 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.390823 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:06:12.390867 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.391045 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.391141 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:06:12.391182 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.391394 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:06:12.391795 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.391853 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:06:12.391897 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:06:12.391921 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.392205 17783 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1201 19:06:12.392375 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:06:12.392841 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:06:12.392854 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.392864 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:06:12.392876 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.393495 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:06:12.393505 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:06:12.393498 17783 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1201 19:06:12.393535 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.393541 17783 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1201 19:06:12.394018 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:06:12.394029 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:06:12.394083 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.394281 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.394404 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.395009 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:06:12.395268 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.395634 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:06:12.395664 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.395713 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:06:12.395748 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.395778 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.396038 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:06:12.396048 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.396353 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:06:12.396570 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.397072 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.397074 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:06:12.397181 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.397245 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:06:12.397277 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.397354 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:06:12.397387 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.397393 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:06:12.397416 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.397426 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:06:12.397707 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:06:12.397714 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:06:12.397746 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:06:12.398155 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:06:12.398193 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.398555 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:06:12.399788 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.400194 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:06:12.400226 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:12.400367 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
W1201 19:06:12.769755 17783 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40750->192.168.39.9:22: read: connection reset by peer
I1201 19:06:12.769786 17783 retry.go:31] will retry after 332.254015ms: ssh: handshake failed: read tcp 192.168.39.1:40750->192.168.39.9:22: read: connection reset by peer
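(The handshake failure above is transient: the guest's sshd is still coming up while many addon goroutines dial at once, so sshutil retries with a short randomized delay. A minimal sketch of that retry pattern, assuming a hypothetical dialSSH stand-in for minikube's real connection setup over golang.org/x/crypto/ssh:)

```go
// Retry with jitter until the SSH endpoint accepts a handshake or the
// attempt budget runs out, mirroring the "will retry after 332.254015ms"
// lines above. dialSSH is a hypothetical placeholder.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func dialSSH(addr string) error {
	// Placeholder: the real code performs an SSH handshake against addr.
	return fmt.Errorf("ssh: handshake failed: connection reset by peer")
}

func dialWithRetry(addr string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = dialSSH(addr); err == nil {
			return nil
		}
		// Randomized backoff so concurrent dialers don't stampede sshd.
		d := time.Duration(200+rand.Intn(300)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	if err := dialWithRetry("192.168.39.9:22", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}
```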
I1201 19:06:13.378707 17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1201 19:06:13.379229 17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1201 19:06:13.383792 17783 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1201 19:06:13.383816 17783 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1201 19:06:13.399655 17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1201 19:06:13.450814 17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1201 19:06:13.461244 17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1201 19:06:13.501944 17783 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1201 19:06:13.501976 17783 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1201 19:06:13.506703 17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1201 19:06:13.531015 17783 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.16894159s)
I1201 19:06:13.531119 17783 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.163707502s)
I1201 19:06:13.531167 17783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1201 19:06:13.531190 17783 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1201 19:06:13.578443 17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1201 19:06:13.653139 17783 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1201 19:06:13.653166 17783 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1201 19:06:13.657848 17783 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1201 19:06:13.657872 17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1201 19:06:13.657851 17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1201 19:06:13.695553 17783 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1201 19:06:13.695578 17783 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1201 19:06:13.733632 17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1201 19:06:13.736993 17783 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1201 19:06:13.737022 17783 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1201 19:06:13.739912 17783 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1201 19:06:13.739939 17783 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1201 19:06:13.926366 17783 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1201 19:06:13.926395 17783 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1201 19:06:13.932656 17783 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1201 19:06:13.932684 17783 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1201 19:06:13.944186 17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1201 19:06:13.969787 17783 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1201 19:06:13.969823 17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1201 19:06:14.017776 17783 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1201 19:06:14.017809 17783 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1201 19:06:14.055367 17783 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1201 19:06:14.055400 17783 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1201 19:06:14.177601 17783 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1201 19:06:14.177630 17783 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1201 19:06:14.196098 17783 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1201 19:06:14.196125 17783 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1201 19:06:14.273628 17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1201 19:06:14.277136 17783 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1201 19:06:14.277165 17783 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1201 19:06:14.356566 17783 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1201 19:06:14.356588 17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1201 19:06:14.460481 17783 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1201 19:06:14.460512 17783 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1201 19:06:14.552963 17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1201 19:06:14.675511 17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1201 19:06:14.720360 17783 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1201 19:06:14.720381 17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1201 19:06:14.901038 17783 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1201 19:06:14.901064 17783 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1201 19:06:15.110995 17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1201 19:06:15.175784 17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.797042055s)
I1201 19:06:15.280604 17783 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1201 19:06:15.280635 17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1201 19:06:15.716850 17783 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1201 19:06:15.716874 17783 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1201 19:06:16.101042 17783 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1201 19:06:16.101066 17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1201 19:06:16.751087 17783 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1201 19:06:16.751112 17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1201 19:06:17.098207 17783 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1201 19:06:17.098239 17783 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1201 19:06:17.378938 17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1201 19:06:18.176787 17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.797526868s)
I1201 19:06:18.176856 17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.777171594s)
I1201 19:06:18.176895 17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.726030797s)
I1201 19:06:18.305157 17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.798414642s)
I1201 19:06:18.305202 17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.843918943s)
I1201 19:06:18.305235 17783 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.774031567s)
I1201 19:06:18.305292 17783 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.774103667s)
I1201 19:06:18.305317 17783 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
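(The bash pipeline that just completed rewrites the coredns ConfigMap so pods can resolve the hypervisor host by name. Reconstructed from the two sed expressions in that command, the patched Corefile gains a `log` directive before `errors` and a `hosts` block before the `forward` plugin; the plugin order outside the inserted pieces is the stock kubeadm Corefile, assumed here:)

```
.:53 {
    log                      # inserted before the "errors" line
    errors
    # ...other stock plugins (health, ready, kubernetes, ...)...
    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
    # ...cache, loop, reload, loadbalance...
}
```

The `fallthrough` matters: queries that don't match the injected host record continue to the remaining plugins instead of getting NXDOMAIN.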
I1201 19:06:18.306114 17783 node_ready.go:35] waiting up to 6m0s for node "addons-153147" to be "Ready" ...
I1201 19:06:18.348113 17783 node_ready.go:49] node "addons-153147" is "Ready"
I1201 19:06:18.348146 17783 node_ready.go:38] duration metric: took 42.004217ms for node "addons-153147" to be "Ready" ...
I1201 19:06:18.348162 17783 api_server.go:52] waiting for apiserver process to appear ...
I1201 19:06:18.348300 17783 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1201 19:06:18.941724 17783 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-153147" context rescaled to 1 replicas
I1201 19:06:19.193426 17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.614952842s)
I1201 19:06:19.193517 17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.535592045s)
I1201 19:06:19.832025 17783 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1201 19:06:19.834709 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:19.835083 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:06:19.835106 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:19.835267 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:06:20.036395 17783 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1201 19:06:20.137394 17783 addons.go:239] Setting addon gcp-auth=true in "addons-153147"
I1201 19:06:20.137446 17783 host.go:66] Checking if "addons-153147" exists ...
I1201 19:06:20.139423 17783 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1201 19:06:20.141876 17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:20.142366 17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
I1201 19:06:20.142406 17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
I1201 19:06:20.142605 17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
I1201 19:06:20.756451 17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.022772928s)
I1201 19:06:20.756496 17783 addons.go:495] Verifying addon ingress=true in "addons-153147"
I1201 19:06:20.756548 17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.812328084s)
I1201 19:06:20.756613 17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.482950749s)
I1201 19:06:20.756641 17783 addons.go:495] Verifying addon registry=true in "addons-153147"
I1201 19:06:20.756729 17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.203734541s)
I1201 19:06:20.756752 17783 addons.go:495] Verifying addon metrics-server=true in "addons-153147"
I1201 19:06:20.756799 17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.081249156s)
I1201 19:06:20.758453 17783 out.go:179] * Verifying ingress addon...
I1201 19:06:20.758458 17783 out.go:179] * Verifying registry addon...
I1201 19:06:20.759294 17783 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-153147 service yakd-dashboard -n yakd-dashboard
I1201 19:06:20.760993 17783 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1201 19:06:20.761074 17783 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1201 19:06:20.812656 17783 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1201 19:06:20.812682 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1201 19:06:20.812698 17783 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1201 19:06:20.812709 17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
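(kapi.go's wait loop, whose "waiting for pod ... current state: Pending" lines repeat below, boils down to listing pods by label selector until every match reports Running. A minimal client-go sketch of the same idea, assuming a kubeconfig at the default path; this is an illustration of the technique, not minikube's actual kapi implementation:)

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls the API server until all pods matching selector in ns
// reach phase Running, or the timeout elapses.
func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPods(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 3*time.Minute); err != nil {
		panic(err)
	}
}
```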
I1201 19:06:20.945908 17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.834868464s)
W1201 19:06:20.945943 17783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1201 19:06:20.945963 17783 retry.go:31] will retry after 317.591372ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
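(This failure is the classic CRD-ordering race: the VolumeSnapshotClass object sits in the same `kubectl apply` batch as the CRD that defines it, and the API server has not finished establishing the CRD by the time the class is submitted, hence "no matches for kind". minikube simply retries, as the next lines show. An alternative, sketched here in Go against the apiextensions client, is to block until the CRD reports Established=True before applying objects of that kind; this is an illustrative approach, not what minikube's addons code does:)

```go
package main

import (
	"context"
	"fmt"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

// waitEstablished blocks until the named CRD reports Established=True,
// after which objects of that kind can be applied without the
// "no matches for kind" race seen above.
func waitEstablished(cs *apiextclient.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(
			context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range crd.Status.Conditions {
				if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("CRD %s not established within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitEstablished("volumesnapshotclasses.snapshot.storage.k8s.io", time.Minute); err != nil {
		panic(err)
	}
	// Only now apply csi-hostpath-snapshotclass.yaml (or retry, as minikube does below).
}
```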
I1201 19:06:21.263726 17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1201 19:06:21.280407 17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1201 19:06:21.280730 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1201 19:06:21.770687 17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1201 19:06:21.770690 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1201 19:06:22.114060 17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.735074442s)
I1201 19:06:22.114102 17783 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-153147"
I1201 19:06:22.114117 17783 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.765792041s)
I1201 19:06:22.114147 17783 api_server.go:72] duration metric: took 9.752045154s to wait for apiserver process to appear ...
I1201 19:06:22.114156 17783 api_server.go:88] waiting for apiserver healthz status ...
I1201 19:06:22.114175 17783 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
I1201 19:06:22.114185 17783 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.97473627s)
I1201 19:06:22.117548 17783 out.go:179] * Verifying csi-hostpath-driver addon...
I1201 19:06:22.117582 17783 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1201 19:06:22.119693 17783 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1201 19:06:22.120393 17783 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1201 19:06:22.121135 17783 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1201 19:06:22.121154 17783 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1201 19:06:22.169513 17783 api_server.go:279] https://192.168.39.9:8443/healthz returned 200:
ok
I1201 19:06:22.170856 17783 api_server.go:141] control plane version: v1.34.2
I1201 19:06:22.170891 17783 api_server.go:131] duration metric: took 56.726559ms to wait for apiserver health ...
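(The healthz probe above is a plain GET against the apiserver's /healthz endpoint, which returns the literal body "ok" when healthy; on default clusters the system:public-info-viewer role lets unauthenticated clients reach it. A minimal Go sketch that skips TLS verification for brevity, which is an assumption for illustration; a real client should trust the cluster CA instead:)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver presents a cert signed by the cluster CA; skipping
		// verification here is only to keep the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.9:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
```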
I1201 19:06:22.170904 17783 system_pods.go:43] waiting for kube-system pods to appear ...
I1201 19:06:22.196484 17783 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1201 19:06:22.196513 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1201 19:06:22.196788 17783 system_pods.go:59] 20 kube-system pods found
I1201 19:06:22.196817 17783 system_pods.go:61] "amd-gpu-device-plugin-nh9fh" [19ed7c27-42bf-429e-a659-5cab61a37789] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1201 19:06:22.196839 17783 system_pods.go:61] "coredns-66bc5c9577-7bgbb" [d82083d8-b7a2-4608-8b02-e6bbf9976482] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1201 19:06:22.196846 17783 system_pods.go:61] "coredns-66bc5c9577-qthgq" [1c971b48-0414-4686-9897-a70b10f42b2f] Running
I1201 19:06:22.196852 17783 system_pods.go:61] "csi-hostpath-attacher-0" [a0da7e77-faf6-4065-9d43-305953b2e6e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1201 19:06:22.196857 17783 system_pods.go:61] "csi-hostpath-resizer-0" [e7a300b2-e469-4b5d-9ebc-f37fda2db088] Pending
I1201 19:06:22.196862 17783 system_pods.go:61] "csi-hostpathplugin-x97sg" [11625919-d915-4098-abc3-6638f492f692] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1201 19:06:22.196867 17783 system_pods.go:61] "etcd-addons-153147" [fe1b4837-c595-4349-a8ec-771f6514e48d] Running
I1201 19:06:22.196872 17783 system_pods.go:61] "kube-apiserver-addons-153147" [13a5d41f-f476-4996-a51b-61e6297cd643] Running
I1201 19:06:22.196875 17783 system_pods.go:61] "kube-controller-manager-addons-153147" [66972fb7-9f43-4a64-babd-2a9ead11665a] Running
I1201 19:06:22.196880 17783 system_pods.go:61] "kube-ingress-dns-minikube" [ada2334d-7448-402c-ba30-9ea15e6fe684] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1201 19:06:22.196884 17783 system_pods.go:61] "kube-proxy-9z5zn" [05f6dd4e-50d1-437b-b0f6-8f7f30ef91f8] Running
I1201 19:06:22.196887 17783 system_pods.go:61] "kube-scheduler-addons-153147" [acdfbb0f-99cf-44e1-b6fc-2157e5de13bb] Running
I1201 19:06:22.196892 17783 system_pods.go:61] "metrics-server-85b7d694d7-r5qgp" [776145bf-6b03-48e3-bbd9-1460bb1d5b86] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1201 19:06:22.196897 17783 system_pods.go:61] "nvidia-device-plugin-daemonset-rcdwp" [42b47333-4324-46b0-9473-d92effc8cb10] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1201 19:06:22.196906 17783 system_pods.go:61] "registry-6b586f9694-mfkdk" [11619fff-1af5-4b33-8893-bcb6ad33587c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1201 19:06:22.196912 17783 system_pods.go:61] "registry-creds-764b6fb674-xdgz5" [c4a135e2-6714-483d-92c9-5a727086d4c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1201 19:06:22.196917 17783 system_pods.go:61] "registry-proxy-pw4sl" [5755be46-29a3-4a7e-9349-89d5d6200020] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1201 19:06:22.196922 17783 system_pods.go:61] "snapshot-controller-7d9fbc56b8-5ddbm" [6006f1a2-b8bd-4d10-9265-4313f7d610bd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1201 19:06:22.196931 17783 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wmd4x" [5f93acba-a273-49d3-ab26-c30d4f16d840] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1201 19:06:22.196936 17783 system_pods.go:61] "storage-provisioner" [366028de-640e-4307-982b-f015bfda82d0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1201 19:06:22.196941 17783 system_pods.go:74] duration metric: took 26.030554ms to wait for pod list to return data ...
I1201 19:06:22.196948 17783 default_sa.go:34] waiting for default service account to be created ...
I1201 19:06:22.216209 17783 default_sa.go:45] found service account: "default"
I1201 19:06:22.216244 17783 default_sa.go:55] duration metric: took 19.285956ms for default service account to be created ...
I1201 19:06:22.216258 17783 system_pods.go:116] waiting for k8s-apps to be running ...
I1201 19:06:22.221283 17783 system_pods.go:86] 20 kube-system pods found
I1201 19:06:22.221322 17783 system_pods.go:89] "amd-gpu-device-plugin-nh9fh" [19ed7c27-42bf-429e-a659-5cab61a37789] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1201 19:06:22.221333 17783 system_pods.go:89] "coredns-66bc5c9577-7bgbb" [d82083d8-b7a2-4608-8b02-e6bbf9976482] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1201 19:06:22.221342 17783 system_pods.go:89] "coredns-66bc5c9577-qthgq" [1c971b48-0414-4686-9897-a70b10f42b2f] Running
I1201 19:06:22.221350 17783 system_pods.go:89] "csi-hostpath-attacher-0" [a0da7e77-faf6-4065-9d43-305953b2e6e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1201 19:06:22.221357 17783 system_pods.go:89] "csi-hostpath-resizer-0" [e7a300b2-e469-4b5d-9ebc-f37fda2db088] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1201 19:06:22.221365 17783 system_pods.go:89] "csi-hostpathplugin-x97sg" [11625919-d915-4098-abc3-6638f492f692] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1201 19:06:22.221374 17783 system_pods.go:89] "etcd-addons-153147" [fe1b4837-c595-4349-a8ec-771f6514e48d] Running
I1201 19:06:22.221380 17783 system_pods.go:89] "kube-apiserver-addons-153147" [13a5d41f-f476-4996-a51b-61e6297cd643] Running
I1201 19:06:22.221390 17783 system_pods.go:89] "kube-controller-manager-addons-153147" [66972fb7-9f43-4a64-babd-2a9ead11665a] Running
I1201 19:06:22.221399 17783 system_pods.go:89] "kube-ingress-dns-minikube" [ada2334d-7448-402c-ba30-9ea15e6fe684] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1201 19:06:22.221407 17783 system_pods.go:89] "kube-proxy-9z5zn" [05f6dd4e-50d1-437b-b0f6-8f7f30ef91f8] Running
I1201 19:06:22.221414 17783 system_pods.go:89] "kube-scheduler-addons-153147" [acdfbb0f-99cf-44e1-b6fc-2157e5de13bb] Running
I1201 19:06:22.221424 17783 system_pods.go:89] "metrics-server-85b7d694d7-r5qgp" [776145bf-6b03-48e3-bbd9-1460bb1d5b86] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1201 19:06:22.221434 17783 system_pods.go:89] "nvidia-device-plugin-daemonset-rcdwp" [42b47333-4324-46b0-9473-d92effc8cb10] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1201 19:06:22.221443 17783 system_pods.go:89] "registry-6b586f9694-mfkdk" [11619fff-1af5-4b33-8893-bcb6ad33587c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1201 19:06:22.221451 17783 system_pods.go:89] "registry-creds-764b6fb674-xdgz5" [c4a135e2-6714-483d-92c9-5a727086d4c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1201 19:06:22.221461 17783 system_pods.go:89] "registry-proxy-pw4sl" [5755be46-29a3-4a7e-9349-89d5d6200020] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1201 19:06:22.221469 17783 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5ddbm" [6006f1a2-b8bd-4d10-9265-4313f7d610bd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1201 19:06:22.221481 17783 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wmd4x" [5f93acba-a273-49d3-ab26-c30d4f16d840] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1201 19:06:22.221489 17783 system_pods.go:89] "storage-provisioner" [366028de-640e-4307-982b-f015bfda82d0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1201 19:06:22.221499 17783 system_pods.go:126] duration metric: took 5.233511ms to wait for k8s-apps to be running ...
I1201 19:06:22.221509 17783 system_svc.go:44] waiting for kubelet service to be running ....
I1201 19:06:22.221561 17783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1201 19:06:22.268147 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1201 19:06:22.272023 17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1201 19:06:22.275246 17783 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1201 19:06:22.275270 17783 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1201 19:06:22.399021 17783 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1201 19:06:22.399054 17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1201 19:06:22.507609 17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1201 19:06:22.640154 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1201 19:06:22.770079 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1201 19:06:22.773505 17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1201 19:06:23.127614 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1201 19:06:23.267225 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1201 19:06:23.269218 17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1201 19:06:23.317894 17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.054130391s)
I1201 19:06:23.317961 17783 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.096376466s)
I1201 19:06:23.317987 17783 system_svc.go:56] duration metric: took 1.096474542s WaitForService to wait for kubelet
I1201 19:06:23.318005 17783 kubeadm.go:587] duration metric: took 10.955900933s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1201 19:06:23.318032 17783 node_conditions.go:102] verifying NodePressure condition ...
I1201 19:06:23.323700 17783 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1201 19:06:23.323747 17783 node_conditions.go:123] node cpu capacity is 2
I1201 19:06:23.323766 17783 node_conditions.go:105] duration metric: took 5.726408ms to run NodePressure ...
I1201 19:06:23.323783 17783 start.go:242] waiting for startup goroutines ...
I1201 19:06:23.659462 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1201 19:06:23.756955 17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.249301029s)
I1201 19:06:23.758001 17783 addons.go:495] Verifying addon gcp-auth=true in "addons-153147"
I1201 19:06:23.760360 17783 out.go:179] * Verifying gcp-auth addon...
I1201 19:06:23.762202 17783 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1201 19:06:23.855653 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1201 19:06:23.875077 17783 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1201 19:06:23.875098 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1201 19:06:23.876180 17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
[log condensed: kapi.go:96 "waiting for pod ... current state: Pending: [<nil>]" repeated roughly every 500ms from 19:06:24 to 19:06:44 for the selectors kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=registry, kubernetes.io/minikube-addons=gcp-auth, and app.kubernetes.io/name=ingress-nginx]
I1201 19:06:44.768371 17783 kapi.go:107] duration metric: took 24.00729829s to wait for kubernetes.io/minikube-addons=registry ...
[log condensed: the same kapi.go:96 polling continued from 19:06:44 to 19:07:33 for kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=gcp-auth, and app.kubernetes.io/name=ingress-nginx, all still reported as Pending: [<nil>]]
I1201 19:07:33.769542 17783 kapi.go:107] duration metric: took 1m13.008548934s to wait for app.kubernetes.io/name=ingress-nginx ...
I1201 19:07:34.126643 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1201 19:07:34.371178 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1201 19:07:34.625558 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1201 19:07:34.766288 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1201 19:07:35.123951 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1201 19:07:35.266771 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1201 19:07:35.625479 17783 kapi.go:107] duration metric: took 1m13.505083555s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1201 19:07:35.765866 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1201 19:07:36.266556 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1201 19:07:36.767271 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1201 19:07:37.268422 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1201 19:07:37.767030 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1201 19:07:38.267324 17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1201 19:07:38.766652 17783 kapi.go:107] duration metric: took 1m15.004447005s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1201 19:07:38.768555 17783 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-153147 cluster.
I1201 19:07:38.769920 17783 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1201 19:07:38.771306 17783 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
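The two advisory messages above describe the gcp-auth webhook's opt-out and refresh mechanics. A minimal sketch of both follows, assuming a hypothetical pod named no-gcp-auth; the label key and the --refresh flag are taken from the messages themselves, and the ingress-nginx controller pod later in this log carries the same label:

# Opt a pod out of credential mounting. The label needs to be in the pod spec
# at creation time, since mounting happens when the pod is admitted.
kubectl --context addons-153147 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-auth                # hypothetical name, for illustration only
  labels:
    gcp-auth-skip-secret: "true"   # key checked by the addon, per the message above
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox   # image already pulled in this cluster
    command: ["sleep", "3600"]
EOF
# Re-mount credentials into pods created before the addon came up:
out/minikube-linux-amd64 -p addons-153147 addons enable gcp-auth --refresh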
I1201 19:07:38.772741 17783 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, default-storageclass, registry-creds, storage-provisioner, inspektor-gadget, amd-gpu-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
I1201 19:07:38.774215 17783 addons.go:530] duration metric: took 1m26.412047147s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner default-storageclass registry-creds storage-provisioner inspektor-gadget amd-gpu-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
I1201 19:07:38.774258 17783 start.go:247] waiting for cluster config update ...
I1201 19:07:38.774283 17783 start.go:256] writing updated cluster config ...
I1201 19:07:38.774569 17783 ssh_runner.go:195] Run: rm -f paused
I1201 19:07:38.782048 17783 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1201 19:07:38.868080 17783 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qthgq" in "kube-system" namespace to be "Ready" or be gone ...
I1201 19:07:38.873747 17783 pod_ready.go:94] pod "coredns-66bc5c9577-qthgq" is "Ready"
I1201 19:07:38.873775 17783 pod_ready.go:86] duration metric: took 5.659434ms for pod "coredns-66bc5c9577-qthgq" in "kube-system" namespace to be "Ready" or be gone ...
I1201 19:07:38.876276 17783 pod_ready.go:83] waiting for pod "etcd-addons-153147" in "kube-system" namespace to be "Ready" or be gone ...
I1201 19:07:38.881293 17783 pod_ready.go:94] pod "etcd-addons-153147" is "Ready"
I1201 19:07:38.881309 17783 pod_ready.go:86] duration metric: took 5.015035ms for pod "etcd-addons-153147" in "kube-system" namespace to be "Ready" or be gone ...
I1201 19:07:38.883057 17783 pod_ready.go:83] waiting for pod "kube-apiserver-addons-153147" in "kube-system" namespace to be "Ready" or be gone ...
I1201 19:07:38.888335 17783 pod_ready.go:94] pod "kube-apiserver-addons-153147" is "Ready"
I1201 19:07:38.888361 17783 pod_ready.go:86] duration metric: took 5.288202ms for pod "kube-apiserver-addons-153147" in "kube-system" namespace to be "Ready" or be gone ...
I1201 19:07:38.890446 17783 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-153147" in "kube-system" namespace to be "Ready" or be gone ...
I1201 19:07:39.186871 17783 pod_ready.go:94] pod "kube-controller-manager-addons-153147" is "Ready"
I1201 19:07:39.186901 17783 pod_ready.go:86] duration metric: took 296.434781ms for pod "kube-controller-manager-addons-153147" in "kube-system" namespace to be "Ready" or be gone ...
I1201 19:07:39.387052 17783 pod_ready.go:83] waiting for pod "kube-proxy-9z5zn" in "kube-system" namespace to be "Ready" or be gone ...
I1201 19:07:39.787200 17783 pod_ready.go:94] pod "kube-proxy-9z5zn" is "Ready"
I1201 19:07:39.787239 17783 pod_ready.go:86] duration metric: took 400.160335ms for pod "kube-proxy-9z5zn" in "kube-system" namespace to be "Ready" or be gone ...
I1201 19:07:39.987769 17783 pod_ready.go:83] waiting for pod "kube-scheduler-addons-153147" in "kube-system" namespace to be "Ready" or be gone ...
I1201 19:07:40.387148 17783 pod_ready.go:94] pod "kube-scheduler-addons-153147" is "Ready"
I1201 19:07:40.387177 17783 pod_ready.go:86] duration metric: took 399.374204ms for pod "kube-scheduler-addons-153147" in "kube-system" namespace to be "Ready" or be gone ...
I1201 19:07:40.387196 17783 pod_ready.go:40] duration metric: took 1.605112351s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
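The pod_ready helper above polls each kube-system pod by label selector. A roughly equivalent manual check with kubectl wait, mirroring the selectors and the 4m budget from the log lines above (a sketch, not the test's actual implementation):

# Approximate the pod_ready checks by hand; each selector comes from the list above.
for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
  kubectl --context addons-153147 -n kube-system wait pod \
    --selector="$sel" --for=condition=Ready --timeout=4m
done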
I1201 19:07:40.434089 17783 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
I1201 19:07:40.436319 17783 out.go:179] * Done! kubectl is now configured to use "addons-153147" cluster and "default" namespace by default
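The two closing checks above (client/server version skew, and which cluster kubectl now points at) can be reproduced directly; a sketch:

kubectl version                  # client and server should both report 1.34.2
kubectl config current-context   # expected: addons-153147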
==> CRI-O <==
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.535286970Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a316f5eb-1e67-40f6-90c1-cc753769fdb6 name=/runtime.v1.RuntimeService/Version
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.536314115Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ccb4140f-f1c2-4d87-b7c2-da64d4c5b5be name=/runtime.v1.ImageService/ImageFsInfo
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.537650730Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764616249537622846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585495,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ccb4140f-f1c2-4d87-b7c2-da64d4c5b5be name=/runtime.v1.ImageService/ImageFsInfo
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.538640248Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d0cf697-c532-4728-ae81-03cdfd3e140c name=/runtime.v1.RuntimeService/ListContainers
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.538854385Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d0cf697-c532-4728-ae81-03cdfd3e140c name=/runtime.v1.RuntimeService/ListContainers
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.539469818Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dff7799c97435cd4d635804cb2ce271bd40ad1cd3f18edcf6046c2f1b2b63ec1,PodSandboxId:a3ecafa2ef89605a21e0cfb3a2a3663f1f11e20978dc92cf686899587f802c8c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764616107675426138,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4486d923-4013-47f9-8cd9-a81f1ddebd66,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e556ec8c41a707014e28ca7432e8a3cea76f365d0a642f3f5f529658529e05,PodSandboxId:509f2f394e11771149a60a24722adfac16e8b8b48f811577c51078edb908eeec,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764616064872566136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2c7cc93-0f51-443c-a999-402fe4c9076b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87c1009d204a1de506f5fd769f03040322ef0fc2612dce071e3cc43d1802bca,PodSandboxId:9263258d416914e7b977ee63ebbedfdbd942b69997e20d6b25cb37aa04480c96,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764616052731591227,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-j5gk6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d71a8554-45d4-4d96-a11a-f3dd97666c64,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:66454cfa07aa182773e252bc453bf7eacd3db562d79ee157e4d4aba4ce93b9f6,PodSandboxId:b268ef6ce3eff24e23b760a6b43e42617fba7f4706069b9061842e2f8649b96f,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1764616052586179570,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-slpzw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4db4569d-df65-42e4-808a-cfe898d653c2,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7edac07552432ac51d44db6e90fc88c44d1bb2a846f4076c306e80ef691df6,PodSandboxId:799f078c1ac8798e0eafe6480c97ab9e59e40f23fad74397abcdef6174958f67,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764616033603465723,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8l42q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fcd4f82d-09d7-45c1-b696-ba124b55f6da,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addccc37646a554def25bbd5bda9c133577ddbc7e024872ea0c4a7ec53fe7c9b,PodSandboxId:09440f74f7ddb2505ddd8ca93fc4ad0ea25b4c2e6f25ac588d87093d9af39a25,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764616013753578311,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ada2334d-7448-402c-ba30-9ea15e6fe684,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eddcd05f5f9c16544cce2ca6e13a573a7d06cc799e4df0460b8b35221b96bc2d,PodSandboxId:9792fb4e64dde1847bba01cfe38915107e19398cb453e24067b8569a02047ade,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38354
98cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764615990010406867,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-nh9fh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ed7c27-42bf-429e-a659-5cab61a37789,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd6c211af9d77e9446c6d73e364921bbba94647263e4c21fcabc93853307404,PodSandboxId:f125b2534c5c1b5dbaa103887d8ba86e851b05f21020e6d9e2496059cef74245,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764615980107437043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 366028de-640e-4307-982b-f015bfda82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83da94bcf0ecf178c1849acd3c0ccd1cf809df4397be23b3f50fed5afaf49d3b,PodSandboxId:3fec75154cb91d729c1a32f602c79580a0369d103037645a47b19ec46c1d2557,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a916
7fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764615973035605383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qthgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c971b48-0414-4686-9897-a70b10f42b2f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdfe08daf700cc46cbce31a511280cd3e8431e0915795fa962406fa7bfb703f,PodSandboxId:515488c0643710af8511b8d09091d23c04bc1827e696ed5a6838803562887c7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764615972026087662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9z5zn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05f6dd4e-50d1-437b-b0f6-8f7f30ef91f8,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e095ec070fd59c1502545f9404290e7b2310014f1838dc85f42e2ec9d71520,PodSandboxId:2ceab01349c6401fb618bfc795ee60b5eded868fd2704113b6846229b32726bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764615960365913364,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53189a71631f236402671f457423c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b03450bd940018d595ff4bdb217255616ba7406499522b6f958ac6c5deaccb9c,PodSandboxId:4acefd3804f0268c4f71d992b8f6e2098b3252f328722cfa829cca14b771cdbe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764615960378686318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be8929a3a21c147a11b04c6ddd818cb,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\
":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb3f768ff899570f82f5fd37af6a8386f67ee3eef54aedc3896727a240e84c9,PodSandboxId:3907faecd946b05b5f7a93b7b53539328ec2e14e3e10aab05cf1911234ec06e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764615960342245559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f354084d95d2a2a9d7ac1e0e2f17a965,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.ku
bernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15bfca845ba7bee915e083e47d9b379fd72cfc886332e91ec435f41a7d475400,PodSandboxId:9bf75387a3b64ffc5422f8eaf5f650528df4434fe16cd8d6a276d4fbfe1e2ffe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764615960310184581,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-153147,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 230b3557e2dadce65ee48646e716bd4c,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d0cf697-c532-4728-ae81-03cdfd3e140c name=/runtime.v1.RuntimeService/ListContainers
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.573789000Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd4a77da-981b-475c-8152-da9586d8ecd4 name=/runtime.v1.RuntimeService/Version
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.573892062Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd4a77da-981b-475c-8152-da9586d8ecd4 name=/runtime.v1.RuntimeService/Version
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.575458065Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=838edf4a-8707-4bcb-9ece-e5649c925b4d name=/runtime.v1.ImageService/ImageFsInfo
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.576856636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764616249576831567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585495,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=838edf4a-8707-4bcb-9ece-e5649c925b4d name=/runtime.v1.ImageService/ImageFsInfo
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.577757168Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94f4c5c6-a4b1-474c-b553-97a560e44a0e name=/runtime.v1.RuntimeService/ListContainers
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.577894028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94f4c5c6-a4b1-474c-b553-97a560e44a0e name=/runtime.v1.RuntimeService/ListContainers
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.578858968Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dff7799c97435cd4d635804cb2ce271bd40ad1cd3f18edcf6046c2f1b2b63ec1,PodSandboxId:a3ecafa2ef89605a21e0cfb3a2a3663f1f11e20978dc92cf686899587f802c8c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764616107675426138,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4486d923-4013-47f9-8cd9-a81f1ddebd66,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e556ec8c41a707014e28ca7432e8a3cea76f365d0a642f3f5f529658529e05,PodSandboxId:509f2f394e11771149a60a24722adfac16e8b8b48f811577c51078edb908eeec,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764616064872566136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2c7cc93-0f51-443c-a999-402fe4c9076b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87c1009d204a1de506f5fd769f03040322ef0fc2612dce071e3cc43d1802bca,PodSandboxId:9263258d416914e7b977ee63ebbedfdbd942b69997e20d6b25cb37aa04480c96,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764616052731591227,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-j5gk6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d71a8554-45d4-4d96-a11a-f3dd97666c64,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:66454cfa07aa182773e252bc453bf7eacd3db562d79ee157e4d4aba4ce93b9f6,PodSandboxId:b268ef6ce3eff24e23b760a6b43e42617fba7f4706069b9061842e2f8649b96f,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1764616052586179570,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-slpzw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4db4569d-df65-42e4-808a-cfe898d653c2,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7edac07552432ac51d44db6e90fc88c44d1bb2a846f4076c306e80ef691df6,PodSandboxId:799f078c1ac8798e0eafe6480c97ab9e59e40f23fad74397abcdef6174958f67,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764616033603465723,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8l42q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fcd4f82d-09d7-45c1-b696-ba124b55f6da,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addccc37646a554def25bbd5bda9c133577ddbc7e024872ea0c4a7ec53fe7c9b,PodSandboxId:09440f74f7ddb2505ddd8ca93fc4ad0ea25b4c2e6f25ac588d87093d9af39a25,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764616013753578311,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ada2334d-7448-402c-ba30-9ea15e6fe684,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eddcd05f5f9c16544cce2ca6e13a573a7d06cc799e4df0460b8b35221b96bc2d,PodSandboxId:9792fb4e64dde1847bba01cfe38915107e19398cb453e24067b8569a02047ade,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38354
98cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764615990010406867,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-nh9fh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ed7c27-42bf-429e-a659-5cab61a37789,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd6c211af9d77e9446c6d73e364921bbba94647263e4c21fcabc93853307404,PodSandboxId:f125b2534c5c1b5dbaa103887d8ba86e851b05f21020e6d9e2496059cef74245,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764615980107437043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 366028de-640e-4307-982b-f015bfda82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83da94bcf0ecf178c1849acd3c0ccd1cf809df4397be23b3f50fed5afaf49d3b,PodSandboxId:3fec75154cb91d729c1a32f602c79580a0369d103037645a47b19ec46c1d2557,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a916
7fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764615973035605383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qthgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c971b48-0414-4686-9897-a70b10f42b2f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdfe08daf700cc46cbce31a511280cd3e8431e0915795fa962406fa7bfb703f,PodSandboxId:515488c0643710af8511b8d09091d23c04bc1827e696ed5a6838803562887c7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764615972026087662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9z5zn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05f6dd4e-50d1-437b-b0f6-8f7f30ef91f8,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e095ec070fd59c1502545f9404290e7b2310014f1838dc85f42e2ec9d71520,PodSandboxId:2ceab01349c6401fb618bfc795ee60b5eded868fd2704113b6846229b32726bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764615960365913364,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53189a71631f236402671f457423c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b03450bd940018d595ff4bdb217255616ba7406499522b6f958ac6c5deaccb9c,PodSandboxId:4acefd3804f0268c4f71d992b8f6e2098b3252f328722cfa829cca14b771cdbe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764615960378686318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be8929a3a21c147a11b04c6ddd818cb,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\
":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb3f768ff899570f82f5fd37af6a8386f67ee3eef54aedc3896727a240e84c9,PodSandboxId:3907faecd946b05b5f7a93b7b53539328ec2e14e3e10aab05cf1911234ec06e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764615960342245559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f354084d95d2a2a9d7ac1e0e2f17a965,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.ku
bernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15bfca845ba7bee915e083e47d9b379fd72cfc886332e91ec435f41a7d475400,PodSandboxId:9bf75387a3b64ffc5422f8eaf5f650528df4434fe16cd8d6a276d4fbfe1e2ffe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764615960310184581,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-153147,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 230b3557e2dadce65ee48646e716bd4c,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94f4c5c6-a4b1-474c-b553-97a560e44a0e name=/runtime.v1.RuntimeService/ListContainers
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.595114475Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=2c704fc8-da7f-4e80-b2f5-efa87158939e name=/runtime.v1.RuntimeService/ListPodSandbox
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.595921588Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:64a9cda6d80a5980de523d1b693a48aec3f4ea54fc83c74bb3f714c1952faf6e,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-bp2ws,Uid:3bf05b82-6c0e-4593-a9ca-a5ed936510a2,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764616248749436032,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-bp2ws,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3bf05b82-6c0e-4593-a9ca-a5ed936510a2,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-01T19:10:48.427671073Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a3ecafa2ef89605a21e0cfb3a2a3663f1f11e20978dc92cf686899587f802c8c,Metadata:&PodSandboxMetadata{Name:nginx,Uid:4486d923-4013-47f9-8cd9-a81f1ddebd66,Namespace:default,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1764616100409884707,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4486d923-4013-47f9-8cd9-a81f1ddebd66,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-01T19:08:19.701805241Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:509f2f394e11771149a60a24722adfac16e8b8b48f811577c51078edb908eeec,Metadata:&PodSandboxMetadata{Name:busybox,Uid:b2c7cc93-0f51-443c-a999-402fe4c9076b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764616061352886089,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2c7cc93-0f51-443c-a999-402fe4c9076b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-01T19:07:41.032534617Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9263258d416914e7b977e
e63ebbedfdbd942b69997e20d6b25cb37aa04480c96,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-6c8bf45fb-j5gk6,Uid:d71a8554-45d4-4d96-a11a-f3dd97666c64,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764616044827315531,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-j5gk6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d71a8554-45d4-4d96-a11a-f3dd97666c64,pod-template-hash: 6c8bf45fb,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-01T19:06:20.603137804Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:799f078c1ac8798e0eafe6480c97ab9e59e40f23fad74397abcdef6174958f67,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-8l42q,Uid:fcd4f82d-09d7-45c1-b696-ba124b55f6da,Namespace:ingress-nginx,Attempt:0,},Stat
e:SANDBOX_NOTREADY,CreatedAt:1764615982062870200,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: ec966a3f-cd0e-4031-bdac-14f082abfed5,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: ec966a3f-cd0e-4031-bdac-14f082abfed5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-8l42q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fcd4f82d-09d7-45c1-b696-ba124b55f6da,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-01T19:06:20.781669391Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b268ef6ce3eff24e23b760a6b43e42617fba7f4706069b9061842e2f8649b96f,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-slpzw,Uid:4db4569d-df65-42e4-808a-cfe898d653c2,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,Crea
tedAt:1764615981367424020,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 6225b011-25e7-4162-9938-a08f4e103cc7,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: 6225b011-25e7-4162-9938-a08f4e103cc7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-slpzw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4db4569d-df65-42e4-808a-cfe898d653c2,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-01T19:06:20.838672171Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f125b2534c5c1b5dbaa103887d8ba86e851b05f21020e6d9e2496059cef74245,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:366028de-640e-4307-982b-f015bfda82d0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764615978652607450,Labels:map[string]str
ing{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 366028de-640e-4307-982b-f015bfda82d0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/co
nfig.seen: 2025-12-01T19:06:18.304726006Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:09440f74f7ddb2505ddd8ca93fc4ad0ea25b4c2e6f25ac588d87093d9af39a25,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:ada2334d-7448-402c-ba30-9ea15e6fe684,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764615978398137737,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ada2334d-7448-402c-ba30-9ea15e6fe684,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"5
3\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-12-01T19:06:18.053043209Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9792fb4e64dde1847bba01cfe38915107e19398cb453e24067b8569a02047ade,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-nh9fh,Uid:19ed7c27-42bf-429e-a659-5cab61a37789,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:176461597573627545
7,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-nh9fh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ed7c27-42bf-429e-a659-5cab61a37789,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-01T19:06:15.398463902Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3fec75154cb91d729c1a32f602c79580a0369d103037645a47b19ec46c1d2557,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-qthgq,Uid:1c971b48-0414-4686-9897-a70b10f42b2f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764615972203885620,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-qthgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c971b48-0414-4686-9897-a70b10f42b2f,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[strin
g]string{kubernetes.io/config.seen: 2025-12-01T19:06:11.873300050Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:515488c0643710af8511b8d09091d23c04bc1827e696ed5a6838803562887c7d,Metadata:&PodSandboxMetadata{Name:kube-proxy-9z5zn,Uid:05f6dd4e-50d1-437b-b0f6-8f7f30ef91f8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764615971907382537,Labels:map[string]string{controller-revision-hash: 66d5f8d6f6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9z5zn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05f6dd4e-50d1-437b-b0f6-8f7f30ef91f8,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-01T19:06:11.578255976Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4acefd3804f0268c4f71d992b8f6e2098b3252f328722cfa829cca14b771cdbe,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-153147,Uid:9be8929a3a21c147a11b04c6ddd818cb,Namespace:kube-system,Attempt:0,},State:
SANDBOX_READY,CreatedAt:1764615960126392010,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be8929a3a21c147a11b04c6ddd818cb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9be8929a3a21c147a11b04c6ddd818cb,kubernetes.io/config.seen: 2025-12-01T19:05:59.603211708Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3907faecd946b05b5f7a93b7b53539328ec2e14e3e10aab05cf1911234ec06e9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-153147,Uid:f354084d95d2a2a9d7ac1e0e2f17a965,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764615960123332507,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f354084d95d2a2a9d7ac1e0e2f17a965,tier: control-plane,},
Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.9:8443,kubernetes.io/config.hash: f354084d95d2a2a9d7ac1e0e2f17a965,kubernetes.io/config.seen: 2025-12-01T19:05:59.603209508Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9bf75387a3b64ffc5422f8eaf5f650528df4434fe16cd8d6a276d4fbfe1e2ffe,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-153147,Uid:230b3557e2dadce65ee48646e716bd4c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764615960120705229,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230b3557e2dadce65ee48646e716bd4c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 230b3557e2dadce65ee48646e716bd4c,kubernetes.io/config.seen: 2025-12-01T19:05:59.603210809Z,kubernetes.io/config.source: file,},Runtime
Handler:,},&PodSandbox{Id:2ceab01349c6401fb618bfc795ee60b5eded868fd2704113b6846229b32726bf,Metadata:&PodSandboxMetadata{Name:etcd-addons-153147,Uid:a53189a71631f236402671f457423c6d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764615960120328417,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53189a71631f236402671f457423c6d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.9:2379,kubernetes.io/config.hash: a53189a71631f236402671f457423c6d,kubernetes.io/config.seen: 2025-12-01T19:05:59.603204988Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2c704fc8-da7f-4e80-b2f5-efa87158939e name=/runtime.v1.RuntimeService/ListPodSandbox
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.597776607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ee24391-7879-4239-8662-a12478b047ef name=/runtime.v1.RuntimeService/ListContainers
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.597857906Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ee24391-7879-4239-8662-a12478b047ef name=/runtime.v1.RuntimeService/ListContainers
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.598286714Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dff7799c97435cd4d635804cb2ce271bd40ad1cd3f18edcf6046c2f1b2b63ec1,PodSandboxId:a3ecafa2ef89605a21e0cfb3a2a3663f1f11e20978dc92cf686899587f802c8c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764616107675426138,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4486d923-4013-47f9-8cd9-a81f1ddebd66,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e556ec8c41a707014e28ca7432e8a3cea76f365d0a642f3f5f529658529e05,PodSandboxId:509f2f394e11771149a60a24722adfac16e8b8b48f811577c51078edb908eeec,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764616064872566136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2c7cc93-0f51-443c-a999-402fe4c9076b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87c1009d204a1de506f5fd769f03040322ef0fc2612dce071e3cc43d1802bca,PodSandboxId:9263258d416914e7b977ee63ebbedfdbd942b69997e20d6b25cb37aa04480c96,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764616052731591227,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-j5gk6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d71a8554-45d4-4d96-a11a-f3dd97666c64,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:66454cfa07aa182773e252bc453bf7eacd3db562d79ee157e4d4aba4ce93b9f6,PodSandboxId:b268ef6ce3eff24e23b760a6b43e42617fba7f4706069b9061842e2f8649b96f,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1764616052586179570,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-slpzw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4db4569d-df65-42e4-808a-cfe898d653c2,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7edac07552432ac51d44db6e90fc88c44d1bb2a846f4076c306e80ef691df6,PodSandboxId:799f078c1ac8798e0eafe6480c97ab9e59e40f23fad74397abcdef6174958f67,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764616033603465723,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8l42q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fcd4f82d-09d7-45c1-b696-ba124b55f6da,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addccc37646a554def25bbd5bda9c133577ddbc7e024872ea0c4a7ec53fe7c9b,PodSandboxId:09440f74f7ddb2505ddd8ca93fc4ad0ea25b4c2e6f25ac588d87093d9af39a25,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764616013753578311,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ada2334d-7448-402c-ba30-9ea15e6fe684,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eddcd05f5f9c16544cce2ca6e13a573a7d06cc799e4df0460b8b35221b96bc2d,PodSandboxId:9792fb4e64dde1847bba01cfe38915107e19398cb453e24067b8569a02047ade,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38354
98cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764615990010406867,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-nh9fh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ed7c27-42bf-429e-a659-5cab61a37789,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd6c211af9d77e9446c6d73e364921bbba94647263e4c21fcabc93853307404,PodSandboxId:f125b2534c5c1b5dbaa103887d8ba86e851b05f21020e6d9e2496059cef74245,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764615980107437043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 366028de-640e-4307-982b-f015bfda82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83da94bcf0ecf178c1849acd3c0ccd1cf809df4397be23b3f50fed5afaf49d3b,PodSandboxId:3fec75154cb91d729c1a32f602c79580a0369d103037645a47b19ec46c1d2557,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a916
7fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764615973035605383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qthgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c971b48-0414-4686-9897-a70b10f42b2f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdfe08daf700cc46cbce31a511280cd3e8431e0915795fa962406fa7bfb703f,PodSandboxId:515488c0643710af8511b8d09091d23c04bc1827e696ed5a6838803562887c7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764615972026087662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9z5zn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05f6dd4e-50d1-437b-b0f6-8f7f30ef91f8,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e095ec070fd59c1502545f9404290e7b2310014f1838dc85f42e2ec9d71520,PodSandboxId:2ceab01349c6401fb618bfc795ee60b5eded868fd2704113b6846229b32726bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764615960365913364,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53189a71631f236402671f457423c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b03450bd940018d595ff4bdb217255616ba7406499522b6f958ac6c5deaccb9c,PodSandboxId:4acefd3804f0268c4f71d992b8f6e2098b3252f328722cfa829cca14b771cdbe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764615960378686318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be8929a3a21c147a11b04c6ddd818cb,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\
":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb3f768ff899570f82f5fd37af6a8386f67ee3eef54aedc3896727a240e84c9,PodSandboxId:3907faecd946b05b5f7a93b7b53539328ec2e14e3e10aab05cf1911234ec06e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764615960342245559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f354084d95d2a2a9d7ac1e0e2f17a965,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.ku
bernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15bfca845ba7bee915e083e47d9b379fd72cfc886332e91ec435f41a7d475400,PodSandboxId:9bf75387a3b64ffc5422f8eaf5f650528df4434fe16cd8d6a276d4fbfe1e2ffe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764615960310184581,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-153147,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 230b3557e2dadce65ee48646e716bd4c,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ee24391-7879-4239-8662-a12478b047ef name=/runtime.v1.RuntimeService/ListContainers
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.600382424Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 3bf05b82-6c0e-4593-a9ca-a5ed936510a2,},},}" file="otel-collector/interceptors.go:62" id=a0c2d304-0e90-4fc9-be1d-a395ac5ecbd4 name=/runtime.v1.RuntimeService/ListPodSandbox
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.601650463Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:64a9cda6d80a5980de523d1b693a48aec3f4ea54fc83c74bb3f714c1952faf6e,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-bp2ws,Uid:3bf05b82-6c0e-4593-a9ca-a5ed936510a2,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764616248749436032,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-bp2ws,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3bf05b82-6c0e-4593-a9ca-a5ed936510a2,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-01T19:10:48.427671073Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=a0c2d304-0e90-4fc9-be1d-a395ac5ecbd4 name=/runtime.v1.RuntimeService/ListPodSandbox
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.603271387Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:64a9cda6d80a5980de523d1b693a48aec3f4ea54fc83c74bb3f714c1952faf6e,Verbose:false,}" file="otel-collector/interceptors.go:62" id=fc03c35b-dfe7-4b53-9adc-73e80ab85c69 name=/runtime.v1.RuntimeService/PodSandboxStatus
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.603405769Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:64a9cda6d80a5980de523d1b693a48aec3f4ea54fc83c74bb3f714c1952faf6e,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-bp2ws,Uid:3bf05b82-6c0e-4593-a9ca-a5ed936510a2,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764616248749436032,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:&UserNamespace{Mode:NODE,Uids:[]*IDMapping{},Gids:[]*IDMapping{},},},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-bp2ws,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3bf05b82-6c0e-4593-a9ca-a5ed936510a2,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen:
2025-12-01T19:10:48.427671073Z,kubernetes.io/config.source: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=fc03c35b-dfe7-4b53-9adc-73e80ab85c69 name=/runtime.v1.RuntimeService/PodSandboxStatus
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.604803078Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 3bf05b82-6c0e-4593-a9ca-a5ed936510a2,},},}" file="otel-collector/interceptors.go:62" id=1da3b1b0-2331-42ac-8fc5-6adc708be008 name=/runtime.v1.RuntimeService/ListContainers
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.604858514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1da3b1b0-2331-42ac-8fc5-6adc708be008 name=/runtime.v1.RuntimeService/ListContainers
Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.604921066Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1da3b1b0-2331-42ac-8fc5-6adc708be008 name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
dff7799c97435 docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 2 minutes ago Running nginx 0 a3ecafa2ef896 nginx default
f2e556ec8c41a gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 3 minutes ago Running busybox 0 509f2f394e117 busybox default
d87c1009d204a registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27 3 minutes ago Running controller 0 9263258d41691 ingress-nginx-controller-6c8bf45fb-j5gk6 ingress-nginx
66454cfa07aa1 884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45 3 minutes ago Exited patch 2 b268ef6ce3eff ingress-nginx-admission-patch-slpzw ingress-nginx
2c7edac075524 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f 3 minutes ago Exited create 0 799f078c1ac87 ingress-nginx-admission-create-8l42q ingress-nginx
addccc37646a5 docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 3 minutes ago Running minikube-ingress-dns 0 09440f74f7ddb kube-ingress-dns-minikube kube-system
eddcd05f5f9c1 docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 9792fb4e64dde amd-gpu-device-plugin-nh9fh kube-system
0bd6c211af9d7 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 f125b2534c5c1 storage-provisioner kube-system
83da94bcf0ecf 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 3fec75154cb91 coredns-66bc5c9577-qthgq kube-system
8cdfe08daf700 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45 4 minutes ago Running kube-proxy 0 515488c064371 kube-proxy-9z5zn kube-system
b03450bd94001 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952 4 minutes ago Running kube-scheduler 0 4acefd3804f02 kube-scheduler-addons-153147 kube-system
49e095ec070fd a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1 4 minutes ago Running etcd 0 2ceab01349c64 etcd-addons-153147 kube-system
0eb3f768ff899 a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85 4 minutes ago Running kube-apiserver 0 3907faecd946b kube-apiserver-addons-153147 kube-system
15bfca845ba7b 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8 4 minutes ago Running kube-controller-manager 0 9bf75387a3b64 kube-controller-manager-addons-153147 kube-system
==> coredns [83da94bcf0ecf178c1849acd3c0ccd1cf809df4397be23b3f50fed5afaf49d3b] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
[INFO] Reloading complete
[INFO] 127.0.0.1:39060 - 34052 "HINFO IN 1330166463351145051.8371010972066214296. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025057212s
[INFO] 10.244.0.23:37019 - 44053 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00045101s
[INFO] 10.244.0.23:35276 - 12720 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00135739s
[INFO] 10.244.0.23:60577 - 18978 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000113032s
[INFO] 10.244.0.23:46315 - 41230 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000872525s
[INFO] 10.244.0.23:49657 - 35489 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000414109s
[INFO] 10.244.0.23:51289 - 11755 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000540474s
[INFO] 10.244.0.23:38509 - 16973 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.00140446s
[INFO] 10.244.0.23:50173 - 52410 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001122543s
[INFO] 10.244.0.26:48423 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000604409s
[INFO] 10.244.0.26:60129 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00023625s
==> describe nodes <==
Name: addons-153147
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-153147
kubernetes.io/os=linux
minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
minikube.k8s.io/name=addons-153147
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_01T19_06_07_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-153147
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 01 Dec 2025 19:06:03 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-153147
AcquireTime: <unset>
RenewTime: Mon, 01 Dec 2025 19:10:41 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 01 Dec 2025 19:09:10 +0000 Mon, 01 Dec 2025 19:06:01 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 01 Dec 2025 19:09:10 +0000 Mon, 01 Dec 2025 19:06:01 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 01 Dec 2025 19:09:10 +0000 Mon, 01 Dec 2025 19:06:01 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 01 Dec 2025 19:09:10 +0000 Mon, 01 Dec 2025 19:06:07 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.9
Hostname: addons-153147
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
System Info:
Machine ID: b210d02d07be413197b5bb937549f8ab
System UUID: b210d02d-07be-4131-97b5-bb937549f8ab
Boot ID: 21e491b2-8bd2-497b-9210-febc088453e1
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02.8
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.2
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m8s
default hello-world-app-5d498dc89-bp2ws 0 (0%) 0 (0%) 0 (0%) 0 (0%) 1s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m30s
ingress-nginx ingress-nginx-controller-6c8bf45fb-j5gk6 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m29s
kube-system amd-gpu-device-plugin-nh9fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m34s
kube-system coredns-66bc5c9577-qthgq 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m38s
kube-system etcd-addons-153147 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m44s
kube-system kube-apiserver-addons-153147 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m43s
kube-system kube-controller-manager-addons-153147 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m43s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m31s
kube-system kube-proxy-9z5zn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m38s
kube-system kube-scheduler-addons-153147 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m43s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m31s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m37s kube-proxy
Normal Starting 4m43s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m43s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m43s kubelet Node addons-153147 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m43s kubelet Node addons-153147 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m43s kubelet Node addons-153147 status is now: NodeHasSufficientPID
Normal NodeReady 4m42s kubelet Node addons-153147 status is now: NodeReady
Normal RegisteredNode 4m39s node-controller Node addons-153147 event: Registered Node addons-153147 in Controller
==> dmesg <==
[ +0.115056] kauditd_printk_skb: 321 callbacks suppressed
[ +1.527686] kauditd_printk_skb: 353 callbacks suppressed
[ +8.661666] kauditd_printk_skb: 20 callbacks suppressed
[ +7.857554] kauditd_printk_skb: 32 callbacks suppressed
[ +9.945283] kauditd_printk_skb: 5 callbacks suppressed
[Dec 1 19:07] kauditd_printk_skb: 32 callbacks suppressed
[ +5.426219] kauditd_printk_skb: 86 callbacks suppressed
[ +6.138416] kauditd_printk_skb: 56 callbacks suppressed
[ +3.567492] kauditd_printk_skb: 86 callbacks suppressed
[ +0.000042] kauditd_printk_skb: 126 callbacks suppressed
[ +0.000033] kauditd_printk_skb: 44 callbacks suppressed
[ +1.051192] kauditd_printk_skb: 102 callbacks suppressed
[ +0.000099] kauditd_printk_skb: 13 callbacks suppressed
[ +5.086961] kauditd_printk_skb: 47 callbacks suppressed
[Dec 1 19:08] kauditd_printk_skb: 22 callbacks suppressed
[ +1.633881] kauditd_printk_skb: 149 callbacks suppressed
[ +0.853585] kauditd_printk_skb: 153 callbacks suppressed
[ +3.861415] kauditd_printk_skb: 125 callbacks suppressed
[ +1.862602] kauditd_printk_skb: 114 callbacks suppressed
[ +5.208541] kauditd_printk_skb: 46 callbacks suppressed
[ +8.218312] kauditd_printk_skb: 30 callbacks suppressed
[ +7.664871] kauditd_printk_skb: 10 callbacks suppressed
[Dec 1 19:09] kauditd_printk_skb: 46 callbacks suppressed
[ +6.832307] kauditd_printk_skb: 5 callbacks suppressed
[Dec 1 19:10] kauditd_printk_skb: 127 callbacks suppressed
==> etcd [49e095ec070fd59c1502545f9404290e7b2310014f1838dc85f42e2ec9d71520] <==
{"level":"info","ts":"2025-12-01T19:07:16.867514Z","caller":"traceutil/trace.go:172","msg":"trace[1667234423] transaction","detail":"{read_only:false; response_revision:1045; number_of_response:1; }","duration":"216.478317ms","start":"2025-12-01T19:07:16.651030Z","end":"2025-12-01T19:07:16.867508Z","steps":["trace[1667234423] 'process raft request' (duration: 211.118928ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-01T19:07:16.867629Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.109871ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-01T19:07:16.867647Z","caller":"traceutil/trace.go:172","msg":"trace[1163621068] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1045; }","duration":"106.129025ms","start":"2025-12-01T19:07:16.761513Z","end":"2025-12-01T19:07:16.867642Z","steps":["trace[1163621068] 'agreement among raft nodes before linearized reading' (duration: 106.068856ms)"],"step_count":1}
{"level":"info","ts":"2025-12-01T19:07:28.190837Z","caller":"traceutil/trace.go:172","msg":"trace[580058826] transaction","detail":"{read_only:false; response_revision:1120; number_of_response:1; }","duration":"168.378845ms","start":"2025-12-01T19:07:28.021910Z","end":"2025-12-01T19:07:28.190289Z","steps":["trace[580058826] 'process raft request' (duration: 162.727533ms)"],"step_count":1}
{"level":"info","ts":"2025-12-01T19:07:29.546132Z","caller":"traceutil/trace.go:172","msg":"trace[1898381605] linearizableReadLoop","detail":"{readStateIndex:1156; appliedIndex:1156; }","duration":"101.329381ms","start":"2025-12-01T19:07:29.444785Z","end":"2025-12-01T19:07:29.546114Z","steps":["trace[1898381605] 'read index received' (duration: 101.325052ms)","trace[1898381605] 'applied index is now lower than readState.Index' (duration: 3.782µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-01T19:07:29.546289Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.488993ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-01T19:07:29.546310Z","caller":"traceutil/trace.go:172","msg":"trace[300433651] range","detail":"{range_begin:/registry/secrets; range_end:; response_count:0; response_revision:1123; }","duration":"101.523736ms","start":"2025-12-01T19:07:29.444780Z","end":"2025-12-01T19:07:29.546304Z","steps":["trace[300433651] 'agreement among raft nodes before linearized reading' (duration: 101.433073ms)"],"step_count":1}
{"level":"info","ts":"2025-12-01T19:07:29.547269Z","caller":"traceutil/trace.go:172","msg":"trace[1752219314] transaction","detail":"{read_only:false; response_revision:1124; number_of_response:1; }","duration":"107.896601ms","start":"2025-12-01T19:07:29.439363Z","end":"2025-12-01T19:07:29.547259Z","steps":["trace[1752219314] 'process raft request' (duration: 107.067035ms)"],"step_count":1}
{"level":"info","ts":"2025-12-01T19:07:31.026447Z","caller":"traceutil/trace.go:172","msg":"trace[1390253838] linearizableReadLoop","detail":"{readStateIndex:1158; appliedIndex:1158; }","duration":"266.222026ms","start":"2025-12-01T19:07:30.760209Z","end":"2025-12-01T19:07:31.026431Z","steps":["trace[1390253838] 'read index received' (duration: 266.216954ms)","trace[1390253838] 'applied index is now lower than readState.Index' (duration: 4.468µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-01T19:07:31.026545Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"266.320452ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-01T19:07:31.026562Z","caller":"traceutil/trace.go:172","msg":"trace[1367812758] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1125; }","duration":"266.351144ms","start":"2025-12-01T19:07:30.760206Z","end":"2025-12-01T19:07:31.026557Z","steps":["trace[1367812758] 'agreement among raft nodes before linearized reading' (duration: 266.295058ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-01T19:07:31.031237Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"268.34602ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-01T19:07:31.031995Z","caller":"traceutil/trace.go:172","msg":"trace[360982683] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1125; }","duration":"269.106518ms","start":"2025-12-01T19:07:30.762872Z","end":"2025-12-01T19:07:31.031978Z","steps":["trace[360982683] 'agreement among raft nodes before linearized reading' (duration: 268.324982ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-01T19:07:31.032395Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.373985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-01T19:07:31.032433Z","caller":"traceutil/trace.go:172","msg":"trace[1215511720] range","detail":"{range_begin:/registry/storageclasses; range_end:; response_count:0; response_revision:1125; }","duration":"138.419366ms","start":"2025-12-01T19:07:30.894003Z","end":"2025-12-01T19:07:31.032422Z","steps":["trace[1215511720] 'agreement among raft nodes before linearized reading' (duration: 138.35243ms)"],"step_count":1}
{"level":"info","ts":"2025-12-01T19:07:34.354066Z","caller":"traceutil/trace.go:172","msg":"trace[791098058] linearizableReadLoop","detail":"{readStateIndex:1179; appliedIndex:1179; }","duration":"133.783596ms","start":"2025-12-01T19:07:34.220263Z","end":"2025-12-01T19:07:34.354046Z","steps":["trace[791098058] 'read index received' (duration: 133.777623ms)","trace[791098058] 'applied index is now lower than readState.Index' (duration: 5.114µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-01T19:07:34.354352Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.069309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-01T19:07:34.354564Z","caller":"traceutil/trace.go:172","msg":"trace[723472346] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1145; }","duration":"134.282105ms","start":"2025-12-01T19:07:34.220259Z","end":"2025-12-01T19:07:34.354541Z","steps":["trace[723472346] 'agreement among raft nodes before linearized reading' (duration: 134.00585ms)"],"step_count":1}
{"level":"info","ts":"2025-12-01T19:07:34.354395Z","caller":"traceutil/trace.go:172","msg":"trace[1910775046] transaction","detail":"{read_only:false; response_revision:1146; number_of_response:1; }","duration":"145.211146ms","start":"2025-12-01T19:07:34.209174Z","end":"2025-12-01T19:07:34.354385Z","steps":["trace[1910775046] 'process raft request' (duration: 145.130137ms)"],"step_count":1}
{"level":"info","ts":"2025-12-01T19:07:34.357251Z","caller":"traceutil/trace.go:172","msg":"trace[543788620] transaction","detail":"{read_only:false; response_revision:1147; number_of_response:1; }","duration":"108.395309ms","start":"2025-12-01T19:07:34.248846Z","end":"2025-12-01T19:07:34.357242Z","steps":["trace[543788620] 'process raft request' (duration: 108.347939ms)"],"step_count":1}
{"level":"info","ts":"2025-12-01T19:08:18.965694Z","caller":"traceutil/trace.go:172","msg":"trace[75931164] transaction","detail":"{read_only:false; response_revision:1469; number_of_response:1; }","duration":"145.955141ms","start":"2025-12-01T19:08:18.819712Z","end":"2025-12-01T19:08:18.965667Z","steps":["trace[75931164] 'process raft request' (duration: 145.87455ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-01T19:08:20.056740Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.003872ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:1 size:171"}
{"level":"info","ts":"2025-12-01T19:08:20.056793Z","caller":"traceutil/trace.go:172","msg":"trace[1192414740] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:1496; }","duration":"139.085402ms","start":"2025-12-01T19:08:19.917697Z","end":"2025-12-01T19:08:20.056783Z","steps":["trace[1192414740] 'range keys from in-memory index tree' (duration: 138.933386ms)"],"step_count":1}
{"level":"info","ts":"2025-12-01T19:08:38.304778Z","caller":"traceutil/trace.go:172","msg":"trace[629939612] transaction","detail":"{read_only:false; response_revision:1616; number_of_response:1; }","duration":"146.512493ms","start":"2025-12-01T19:08:38.158250Z","end":"2025-12-01T19:08:38.304763Z","steps":["trace[629939612] 'process raft request' (duration: 146.432108ms)"],"step_count":1}
{"level":"info","ts":"2025-12-01T19:08:39.672835Z","caller":"traceutil/trace.go:172","msg":"trace[286552751] transaction","detail":"{read_only:false; response_revision:1629; number_of_response:1; }","duration":"119.034778ms","start":"2025-12-01T19:08:39.553787Z","end":"2025-12-01T19:08:39.672822Z","steps":["trace[286552751] 'process raft request' (duration: 118.907382ms)"],"step_count":1}
==> kernel <==
19:10:49 up 5 min, 0 users, load average: 0.53, 0.93, 0.47
Linux addons-153147 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec 1 18:07:10 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02.8"
==> kube-apiserver [0eb3f768ff899570f82f5fd37af6a8386f67ee3eef54aedc3896727a240e84c9] <==
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I1201 19:07:03.121094 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1201 19:07:03.131091 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
E1201 19:07:52.224250 1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37488: use of closed network connection
E1201 19:07:52.420671 1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37502: use of closed network connection
I1201 19:08:13.512336 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.220.6"}
I1201 19:08:19.521299 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1201 19:08:19.746536 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.104.51"}
E1201 19:08:41.346916 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I1201 19:08:47.200909 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1201 19:09:04.106001 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1201 19:09:09.778242 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1201 19:09:09.778367 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1201 19:09:09.814457 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1201 19:09:09.827820 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1201 19:09:09.858119 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1201 19:09:09.858223 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1201 19:09:09.885919 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1201 19:09:09.885997 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1201 19:09:10.829608 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1201 19:09:10.886036 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1201 19:09:10.952022 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
E1201 19:09:11.286433 1 watch.go:272] "Unhandled Error" err="client disconnected" logger="UnhandledError"
I1201 19:10:48.488144 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.132.7"}
==> kube-controller-manager [15bfca845ba7bee915e083e47d9b379fd72cfc886332e91ec435f41a7d475400] <==
E1201 19:09:14.976494 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1201 19:09:17.825249 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1201 19:09:17.826857 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1201 19:09:18.026580 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1201 19:09:18.027787 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1201 19:09:19.621705 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1201 19:09:19.622817 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1201 19:09:27.786467 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1201 19:09:27.787725 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1201 19:09:28.318665 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1201 19:09:28.319659 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1201 19:09:29.499618 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1201 19:09:29.500829 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1201 19:09:41.994157 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1201 19:09:41.995199 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1201 19:09:43.925543 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1201 19:09:43.926502 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1201 19:09:47.807247 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1201 19:09:47.808290 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1201 19:10:20.001424 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1201 19:10:20.002597 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1201 19:10:23.345825 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1201 19:10:23.346828 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1201 19:10:28.900396 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1201 19:10:28.901506 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [8cdfe08daf700cc46cbce31a511280cd3e8431e0915795fa962406fa7bfb703f] <==
I1201 19:06:12.217269 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1201 19:06:12.319351 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1201 19:06:12.319429 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.9"]
E1201 19:06:12.319504 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1201 19:06:12.489229 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1201 19:06:12.489294 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1201 19:06:12.489322 1 server_linux.go:132] "Using iptables Proxier"
I1201 19:06:12.529641 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1201 19:06:12.529983 1 server.go:527] "Version info" version="v1.34.2"
I1201 19:06:12.529997 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1201 19:06:12.531024 1 config.go:106] "Starting endpoint slice config controller"
I1201 19:06:12.531036 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1201 19:06:12.540526 1 config.go:200] "Starting service config controller"
I1201 19:06:12.540558 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1201 19:06:12.540897 1 config.go:403] "Starting serviceCIDR config controller"
I1201 19:06:12.540905 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1201 19:06:12.555077 1 config.go:309] "Starting node config controller"
I1201 19:06:12.555106 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1201 19:06:12.555114 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1201 19:06:12.633101 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1201 19:06:12.641432 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1201 19:06:12.641450 1 shared_informer.go:356] "Caches are synced" controller="service config"
==> kube-scheduler [b03450bd940018d595ff4bdb217255616ba7406499522b6f958ac6c5deaccb9c] <==
E1201 19:06:03.723111 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1201 19:06:03.723159 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1201 19:06:03.723229 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1201 19:06:03.723276 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1201 19:06:03.723345 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1201 19:06:03.723550 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1201 19:06:03.723669 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1201 19:06:03.723827 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1201 19:06:03.724218 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1201 19:06:03.724328 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1201 19:06:04.675237 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1201 19:06:04.679007 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1201 19:06:04.703870 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1201 19:06:04.745006 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1201 19:06:04.761308 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1201 19:06:04.824404 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1201 19:06:04.852930 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1201 19:06:04.887095 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1201 19:06:04.944071 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1201 19:06:04.964853 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1201 19:06:04.968251 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1201 19:06:04.992075 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1201 19:06:05.040878 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1201 19:06:05.074210 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
I1201 19:06:07.797713 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Dec 01 19:09:13 addons-153147 kubelet[1509]: E1201 19:09:13.020084 1509 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f922107af7ad9d3370bb307907eeae32f69accbdf1a450177580ee8d3d894eb3\": container with ID starting with f922107af7ad9d3370bb307907eeae32f69accbdf1a450177580ee8d3d894eb3 not found: ID does not exist" containerID="f922107af7ad9d3370bb307907eeae32f69accbdf1a450177580ee8d3d894eb3"
Dec 01 19:09:13 addons-153147 kubelet[1509]: I1201 19:09:13.020127 1509 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f922107af7ad9d3370bb307907eeae32f69accbdf1a450177580ee8d3d894eb3"} err="failed to get container status \"f922107af7ad9d3370bb307907eeae32f69accbdf1a450177580ee8d3d894eb3\": rpc error: code = NotFound desc = could not find container \"f922107af7ad9d3370bb307907eeae32f69accbdf1a450177580ee8d3d894eb3\": container with ID starting with f922107af7ad9d3370bb307907eeae32f69accbdf1a450177580ee8d3d894eb3 not found: ID does not exist"
Dec 01 19:09:16 addons-153147 kubelet[1509]: E1201 19:09:16.976699 1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616156974881278 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:09:16 addons-153147 kubelet[1509]: E1201 19:09:16.976741 1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616156974881278 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:09:26 addons-153147 kubelet[1509]: E1201 19:09:26.979227 1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616166978657703 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:09:26 addons-153147 kubelet[1509]: E1201 19:09:26.979253 1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616166978657703 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:09:36 addons-153147 kubelet[1509]: E1201 19:09:36.983514 1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616176982796946 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:09:36 addons-153147 kubelet[1509]: E1201 19:09:36.983557 1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616176982796946 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:09:46 addons-153147 kubelet[1509]: E1201 19:09:46.987477 1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616186986784654 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:09:46 addons-153147 kubelet[1509]: E1201 19:09:46.987527 1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616186986784654 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:09:56 addons-153147 kubelet[1509]: E1201 19:09:56.990205 1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616196989609845 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:09:56 addons-153147 kubelet[1509]: E1201 19:09:56.990232 1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616196989609845 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:10:01 addons-153147 kubelet[1509]: I1201 19:10:01.324593 1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-nh9fh" secret="" err="secret \"gcp-auth\" not found"
Dec 01 19:10:05 addons-153147 kubelet[1509]: I1201 19:10:05.324487 1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Dec 01 19:10:06 addons-153147 kubelet[1509]: E1201 19:10:06.993495 1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616206993004647 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:10:06 addons-153147 kubelet[1509]: E1201 19:10:06.993570 1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616206993004647 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:10:16 addons-153147 kubelet[1509]: E1201 19:10:16.996062 1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616216995575510 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:10:16 addons-153147 kubelet[1509]: E1201 19:10:16.996089 1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616216995575510 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:10:26 addons-153147 kubelet[1509]: E1201 19:10:26.999280 1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616226998792023 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:10:26 addons-153147 kubelet[1509]: E1201 19:10:26.999309 1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616226998792023 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:10:37 addons-153147 kubelet[1509]: E1201 19:10:37.002998 1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616237002038910 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:10:37 addons-153147 kubelet[1509]: E1201 19:10:37.003032 1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616237002038910 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:10:47 addons-153147 kubelet[1509]: E1201 19:10:47.005763 1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616247005317858 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:10:47 addons-153147 kubelet[1509]: E1201 19:10:47.005806 1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616247005317858 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
Dec 01 19:10:48 addons-153147 kubelet[1509]: I1201 19:10:48.504343 1509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxz7w\" (UniqueName: \"kubernetes.io/projected/3bf05b82-6c0e-4593-a9ca-a5ed936510a2-kube-api-access-nxz7w\") pod \"hello-world-app-5d498dc89-bp2ws\" (UID: \"3bf05b82-6c0e-4593-a9ca-a5ed936510a2\") " pod="default/hello-world-app-5d498dc89-bp2ws"
==> storage-provisioner [0bd6c211af9d77e9446c6d73e364921bbba94647263e4c21fcabc93853307404] <==
W1201 19:10:24.658346 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:26.661466 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:26.666628 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:28.670459 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:28.677008 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:30.681313 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:30.690114 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:32.693711 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:32.702520 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:34.706656 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:34.712194 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:36.716064 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:36.721118 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:38.725030 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:38.733925 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:40.736825 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:40.742586 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:42.746235 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:42.751240 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:44.755144 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:44.760380 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:46.763841 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:46.769406 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:48.776744 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1201 19:10:48.792270 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-153147 -n addons-153147
helpers_test.go:269: (dbg) Run: kubectl --context addons-153147 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-bp2ws ingress-nginx-admission-create-8l42q ingress-nginx-admission-patch-slpzw
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context addons-153147 describe pod hello-world-app-5d498dc89-bp2ws ingress-nginx-admission-create-8l42q ingress-nginx-admission-patch-slpzw
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-153147 describe pod hello-world-app-5d498dc89-bp2ws ingress-nginx-admission-create-8l42q ingress-nginx-admission-patch-slpzw: exit status 1 (83.794844ms)
-- stdout --
Name:             hello-world-app-5d498dc89-bp2ws
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-153147/192.168.39.9
Start Time:       Mon, 01 Dec 2025 19:10:48 +0000
Labels:           app=hello-world-app
                  pod-template-hash=5d498dc89
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/hello-world-app-5d498dc89
Containers:
  hello-world-app:
    Container ID:
    Image:          docker.io/kicbase/echo-server:1.0
    Image ID:
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nxz7w (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-nxz7w:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-bp2ws to addons-153147
  Normal  Pulling    1s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-8l42q" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-slpzw" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-153147 describe pod hello-world-app-5d498dc89-bp2ws ingress-nginx-admission-create-8l42q ingress-nginx-admission-patch-slpzw: exit status 1
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-153147 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-153147 addons disable ingress-dns --alsologtostderr -v=1: (1.033739436s)
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-153147 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-153147 addons disable ingress --alsologtostderr -v=1: (7.781734517s)
--- FAIL: TestAddons/parallel/Ingress (160.29s)