=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run: kubectl --context addons-663794 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run: kubectl --context addons-663794 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run: kubectl --context addons-663794 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [2b013a75-814d-4176-8b62-830d8b345b7c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [2b013a75-814d-4176-8b62-830d8b345b7c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004564729s
I1115 09:09:41.129832 247445 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run: out/minikube-linux-amd64 -p addons-663794 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-663794 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.203278805s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
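The `ssh: Process exited with status 28` above is curl's exit code propagated through `minikube ssh`; in curl's documented exit-code list, 28 is an operation timeout, meaning the ingress controller never answered on 127.0.0.1:80 inside the VM before curl gave up. A minimal helper for decoding the curl statuses these tests surface (a sketch; the meanings are taken from curl's documented EXIT CODES list):

```shell
# Decode the curl exit codes most often seen in this ingress test.
# Meanings come from curl's documented EXIT CODES list (`man curl`).
curl_exit_meaning() {
  case "$1" in
    0)  echo "success" ;;
    6)  echo "could not resolve host" ;;
    7)  echo "failed to connect to host" ;;
    28) echo "operation timed out" ;;
    *)  echo "other curl error ($1)" ;;
  esac
}

curl_exit_meaning 28   # → operation timed out
```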
addons_test.go:288: (dbg) Run: kubectl --context addons-663794 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run: out/minikube-linux-amd64 -p addons-663794 ip
addons_test.go:299: (dbg) Run: nslookup hello-john.test 192.168.39.78
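The nslookup step above verifies that the ingress-dns addon answers for hello-john.test at the minikube IP (192.168.39.78). A sketch of extracting the answered address from nslookup-style output, using a hypothetical `parse_answer` helper over a canned sample rather than a live cluster (the first `Address:` line is the DNS server itself; the answer is the last one):

```shell
# Pull the resolved address out of nslookup-style output: keep the value
# of the last "Address:" line, which is the answer (earlier ones name
# the server that responded).
parse_answer() {
  printf '%s\n' "$1" | awk '/^Address:/{a=$2} END{print a}'
}

sample='Server:         192.168.39.78
Address:        192.168.39.78#53

Name:   hello-john.test
Address: 192.168.39.78'

parse_answer "$sample"   # → 192.168.39.78
```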
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-663794 -n addons-663794
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p addons-663794 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-663794 logs -n 25: (1.240755586s)
helpers_test.go:260: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-071043 │ download-only-071043 │ jenkins │ v1.37.0 │ 15 Nov 25 09:06 UTC │ 15 Nov 25 09:06 UTC │
│ start │ --download-only -p binary-mirror-042783 --alsologtostderr --binary-mirror http://127.0.0.1:43911 --driver=kvm2 --container-runtime=crio │ binary-mirror-042783 │ jenkins │ v1.37.0 │ 15 Nov 25 09:06 UTC │ │
│ delete │ -p binary-mirror-042783 │ binary-mirror-042783 │ jenkins │ v1.37.0 │ 15 Nov 25 09:06 UTC │ 15 Nov 25 09:06 UTC │
│ addons │ disable dashboard -p addons-663794 │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:06 UTC │ │
│ addons │ enable dashboard -p addons-663794 │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:06 UTC │ │
│ start │ -p addons-663794 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:06 UTC │ 15 Nov 25 09:08 UTC │
│ addons │ addons-663794 addons disable volcano --alsologtostderr -v=1 │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:08 UTC │ 15 Nov 25 09:08 UTC │
│ addons │ addons-663794 addons disable gcp-auth --alsologtostderr -v=1 │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
│ addons │ enable headlamp -p addons-663794 --alsologtostderr -v=1 │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
│ addons │ addons-663794 addons disable metrics-server --alsologtostderr -v=1 │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
│ addons │ addons-663794 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
│ addons │ addons-663794 addons disable yakd --alsologtostderr -v=1 │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
│ addons │ addons-663794 addons disable headlamp --alsologtostderr -v=1 │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
│ ip │ addons-663794 ip │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
│ addons │ addons-663794 addons disable registry --alsologtostderr -v=1 │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
│ addons │ addons-663794 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
│ ssh │ addons-663794 ssh cat /opt/local-path-provisioner/pvc-7cb226ef-cf3e-40d0-abc8-3408242d700f_default_test-pvc/file1 │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
│ addons │ addons-663794 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
│ addons │ addons-663794 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
│ ssh │ addons-663794 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-663794 │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
│ addons │ addons-663794 addons disable registry-creds --alsologtostderr -v=1 │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:09 UTC │ 15 Nov 25 09:09 UTC │
│ addons │ addons-663794 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:10 UTC │ 15 Nov 25 09:10 UTC │
│ addons │ addons-663794 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:10 UTC │ 15 Nov 25 09:10 UTC │
│ ip │ addons-663794 ip │ addons-663794 │ jenkins │ v1.37.0 │ 15 Nov 25 09:11 UTC │ 15 Nov 25 09:11 UTC │
└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/11/15 09:06:41
Running on machine: ubuntu-20-agent-9
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1115 09:06:41.546864 248107 out.go:360] Setting OutFile to fd 1 ...
I1115 09:06:41.546980 248107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:06:41.546991 248107 out.go:374] Setting ErrFile to fd 2...
I1115 09:06:41.546997 248107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:06:41.547214 248107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21895-243545/.minikube/bin
I1115 09:06:41.547812 248107 out.go:368] Setting JSON to false
I1115 09:06:41.548747 248107 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6544,"bootTime":1763191058,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1115 09:06:41.548838 248107 start.go:143] virtualization: kvm guest
I1115 09:06:41.550687 248107 out.go:179] * [addons-663794] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1115 09:06:41.552004 248107 notify.go:221] Checking for updates...
I1115 09:06:41.552009 248107 out.go:179] - MINIKUBE_LOCATION=21895
I1115 09:06:41.553220 248107 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1115 09:06:41.554361 248107 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21895-243545/kubeconfig
I1115 09:06:41.555550 248107 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21895-243545/.minikube
I1115 09:06:41.556604 248107 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1115 09:06:41.557555 248107 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1115 09:06:41.558936 248107 driver.go:422] Setting default libvirt URI to qemu:///system
I1115 09:06:41.588056 248107 out.go:179] * Using the kvm2 driver based on user configuration
I1115 09:06:41.589094 248107 start.go:309] selected driver: kvm2
I1115 09:06:41.589113 248107 start.go:930] validating driver "kvm2" against <nil>
I1115 09:06:41.589135 248107 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1115 09:06:41.590145 248107 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1115 09:06:41.590489 248107 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1115 09:06:41.590529 248107 cni.go:84] Creating CNI manager for ""
I1115 09:06:41.590590 248107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1115 09:06:41.590602 248107 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1115 09:06:41.590660 248107 start.go:353] cluster config:
{Name:addons-663794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-663794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1115 09:06:41.590783 248107 iso.go:125] acquiring lock: {Name:mkff40ddaa37657d9e8283719561f1fce12069ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1115 09:06:41.592375 248107 out.go:179] * Starting "addons-663794" primary control-plane node in "addons-663794" cluster
I1115 09:06:41.593483 248107 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1115 09:06:41.593517 248107 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21895-243545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
I1115 09:06:41.593546 248107 cache.go:65] Caching tarball of preloaded images
I1115 09:06:41.593646 248107 preload.go:238] Found /home/jenkins/minikube-integration/21895-243545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1115 09:06:41.593661 248107 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
I1115 09:06:41.594034 248107 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/config.json ...
I1115 09:06:41.594060 248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/config.json: {Name:mk4981e3557a8519da971ebcf18fd803355391a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1115 09:06:41.594212 248107 start.go:360] acquireMachinesLock for addons-663794: {Name:mkd96327c544e60a7a5bc14d0ad542aaa69bb5ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1115 09:06:41.594282 248107 start.go:364] duration metric: took 52.172µs to acquireMachinesLock for "addons-663794"
I1115 09:06:41.594308 248107 start.go:93] Provisioning new machine with config: &{Name:addons-663794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-663794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1115 09:06:41.594354 248107 start.go:125] createHost starting for "" (driver="kvm2")
I1115 09:06:41.595706 248107 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1115 09:06:41.595869 248107 start.go:159] libmachine.API.Create for "addons-663794" (driver="kvm2")
I1115 09:06:41.595900 248107 client.go:173] LocalClient.Create starting
I1115 09:06:41.596000 248107 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca.pem
I1115 09:06:41.963260 248107 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/cert.pem
I1115 09:06:42.123428 248107 main.go:143] libmachine: creating domain...
I1115 09:06:42.123475 248107 main.go:143] libmachine: creating network...
I1115 09:06:42.124845 248107 main.go:143] libmachine: found existing default network
I1115 09:06:42.125078 248107 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1115 09:06:42.125602 248107 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001daca50}
I1115 09:06:42.125705 248107 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-663794</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1115 09:06:42.131411 248107 main.go:143] libmachine: creating private network mk-addons-663794 192.168.39.0/24...
I1115 09:06:42.199869 248107 main.go:143] libmachine: private network mk-addons-663794 192.168.39.0/24 created
I1115 09:06:42.200168 248107 main.go:143] libmachine: <network>
<name>mk-addons-663794</name>
<uuid>a9ac5aa2-0830-4fea-9e79-794861abb986</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:9b:7f:82'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1115 09:06:42.200201 248107 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794 ...
I1115 09:06:42.200224 248107 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21895-243545/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso
I1115 09:06:42.200240 248107 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21895-243545/.minikube
I1115 09:06:42.200308 248107 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21895-243545/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21895-243545/.minikube/cache/iso/amd64/minikube-v1.37.0-1762018871-21834-amd64.iso...
I1115 09:06:42.480300 248107 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa...
I1115 09:06:42.650903 248107 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/addons-663794.rawdisk...
I1115 09:06:42.650950 248107 main.go:143] libmachine: Writing magic tar header
I1115 09:06:42.650971 248107 main.go:143] libmachine: Writing SSH key tar header
I1115 09:06:42.651037 248107 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794 ...
I1115 09:06:42.651103 248107 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794
I1115 09:06:42.651130 248107 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794 (perms=drwx------)
I1115 09:06:42.651140 248107 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21895-243545/.minikube/machines
I1115 09:06:42.651150 248107 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21895-243545/.minikube/machines (perms=drwxr-xr-x)
I1115 09:06:42.651160 248107 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21895-243545/.minikube
I1115 09:06:42.651169 248107 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21895-243545/.minikube (perms=drwxr-xr-x)
I1115 09:06:42.651178 248107 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21895-243545
I1115 09:06:42.651190 248107 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21895-243545 (perms=drwxrwxr-x)
I1115 09:06:42.651201 248107 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1115 09:06:42.651212 248107 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1115 09:06:42.651222 248107 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1115 09:06:42.651232 248107 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1115 09:06:42.651242 248107 main.go:143] libmachine: checking permissions on dir: /home
I1115 09:06:42.651251 248107 main.go:143] libmachine: skipping /home - not owner
I1115 09:06:42.651254 248107 main.go:143] libmachine: defining domain...
I1115 09:06:42.652533 248107 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-663794</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/addons-663794.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-663794'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1115 09:06:42.657733 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:5e:74:5d in network default
I1115 09:06:42.658286 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:42.658301 248107 main.go:143] libmachine: starting domain...
I1115 09:06:42.658306 248107 main.go:143] libmachine: ensuring networks are active...
I1115 09:06:42.659098 248107 main.go:143] libmachine: Ensuring network default is active
I1115 09:06:42.659520 248107 main.go:143] libmachine: Ensuring network mk-addons-663794 is active
I1115 09:06:42.660134 248107 main.go:143] libmachine: getting domain XML...
I1115 09:06:42.661238 248107 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-663794</name>
<uuid>39d04125-32a9-467f-ac20-4c898cc459d3</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/addons-663794.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:40:3c:f2'/>
<source network='mk-addons-663794'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:5e:74:5d'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1115 09:06:43.919486 248107 main.go:143] libmachine: waiting for domain to start...
I1115 09:06:43.920783 248107 main.go:143] libmachine: domain is now running
I1115 09:06:43.920800 248107 main.go:143] libmachine: waiting for IP...
I1115 09:06:43.921491 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:43.921939 248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
I1115 09:06:43.921953 248107 main.go:143] libmachine: trying to list again with source=arp
I1115 09:06:43.922176 248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
I1115 09:06:43.922237 248107 retry.go:31] will retry after 239.594725ms: waiting for domain to come up
I1115 09:06:44.163728 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:44.164336 248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
I1115 09:06:44.164356 248107 main.go:143] libmachine: trying to list again with source=arp
I1115 09:06:44.164701 248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
I1115 09:06:44.164747 248107 retry.go:31] will retry after 362.377021ms: waiting for domain to come up
I1115 09:06:44.528189 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:44.528763 248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
I1115 09:06:44.528780 248107 main.go:143] libmachine: trying to list again with source=arp
I1115 09:06:44.529050 248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
I1115 09:06:44.529089 248107 retry.go:31] will retry after 430.148195ms: waiting for domain to come up
I1115 09:06:44.960493 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:44.961042 248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
I1115 09:06:44.961059 248107 main.go:143] libmachine: trying to list again with source=arp
I1115 09:06:44.961336 248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
I1115 09:06:44.961370 248107 retry.go:31] will retry after 496.012903ms: waiting for domain to come up
I1115 09:06:45.459109 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:45.459736 248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
I1115 09:06:45.459754 248107 main.go:143] libmachine: trying to list again with source=arp
I1115 09:06:45.460091 248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
I1115 09:06:45.460140 248107 retry.go:31] will retry after 627.192444ms: waiting for domain to come up
I1115 09:06:46.088954 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:46.089579 248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
I1115 09:06:46.089595 248107 main.go:143] libmachine: trying to list again with source=arp
I1115 09:06:46.089930 248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
I1115 09:06:46.089963 248107 retry.go:31] will retry after 677.793638ms: waiting for domain to come up
I1115 09:06:46.768982 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:46.769589 248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
I1115 09:06:46.769601 248107 main.go:143] libmachine: trying to list again with source=arp
I1115 09:06:46.769937 248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
I1115 09:06:46.769976 248107 retry.go:31] will retry after 1.101499246s: waiting for domain to come up
I1115 09:06:47.873250 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:47.873818 248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
I1115 09:06:47.873836 248107 main.go:143] libmachine: trying to list again with source=arp
I1115 09:06:47.874118 248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
I1115 09:06:47.874157 248107 retry.go:31] will retry after 1.167236905s: waiting for domain to come up
I1115 09:06:49.043143 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:49.043842 248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
I1115 09:06:49.043865 248107 main.go:143] libmachine: trying to list again with source=arp
I1115 09:06:49.044260 248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
I1115 09:06:49.044308 248107 retry.go:31] will retry after 1.619569537s: waiting for domain to come up
I1115 09:06:50.666152 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:50.666735 248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
I1115 09:06:50.666751 248107 main.go:143] libmachine: trying to list again with source=arp
I1115 09:06:50.667031 248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
I1115 09:06:50.667069 248107 retry.go:31] will retry after 1.790503798s: waiting for domain to come up
I1115 09:06:52.459395 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:52.460021 248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
I1115 09:06:52.460047 248107 main.go:143] libmachine: trying to list again with source=arp
I1115 09:06:52.460410 248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
I1115 09:06:52.460471 248107 retry.go:31] will retry after 2.798447952s: waiting for domain to come up
I1115 09:06:55.262422 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:55.266122 248107 main.go:143] libmachine: no network interface addresses found for domain addons-663794 (source=lease)
I1115 09:06:55.266152 248107 main.go:143] libmachine: trying to list again with source=arp
I1115 09:06:55.266515 248107 main.go:143] libmachine: unable to find current IP address of domain addons-663794 in network mk-addons-663794 (interfaces detected: [])
I1115 09:06:55.266560 248107 retry.go:31] will retry after 2.822652152s: waiting for domain to come up
I1115 09:06:58.091739 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:58.092322 248107 main.go:143] libmachine: domain addons-663794 has current primary IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:58.092334 248107 main.go:143] libmachine: found domain IP: 192.168.39.78
I1115 09:06:58.092342 248107 main.go:143] libmachine: reserving static IP address...
I1115 09:06:58.092717 248107 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-663794", mac: "52:54:00:40:3c:f2", ip: "192.168.39.78"} in network mk-addons-663794
I1115 09:06:58.276722 248107 main.go:143] libmachine: reserved static IP address 192.168.39.78 for domain addons-663794
I1115 09:06:58.276751 248107 main.go:143] libmachine: waiting for SSH...
I1115 09:06:58.276757 248107 main.go:143] libmachine: Getting to WaitForSSH function...
I1115 09:06:58.280280 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:58.280855 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:minikube Clientid:01:52:54:00:40:3c:f2}
I1115 09:06:58.280893 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:58.281128 248107 main.go:143] libmachine: Using SSH client type: native
I1115 09:06:58.281438 248107 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil> [] 0s} 192.168.39.78 22 <nil> <nil>}
I1115 09:06:58.281470 248107 main.go:143] libmachine: About to run SSH command:
exit 0
I1115 09:06:58.421301 248107 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1115 09:06:58.421695 248107 main.go:143] libmachine: domain creation complete
I1115 09:06:58.422991 248107 machine.go:94] provisionDockerMachine start ...
I1115 09:06:58.425372 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:58.425775 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:06:58.425821 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:58.426013 248107 main.go:143] libmachine: Using SSH client type: native
I1115 09:06:58.426223 248107 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil> [] 0s} 192.168.39.78 22 <nil> <nil>}
I1115 09:06:58.426235 248107 main.go:143] libmachine: About to run SSH command:
hostname
I1115 09:06:58.538245 248107 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1115 09:06:58.538282 248107 buildroot.go:166] provisioning hostname "addons-663794"
I1115 09:06:58.541050 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:58.541417 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:06:58.541463 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:58.541627 248107 main.go:143] libmachine: Using SSH client type: native
I1115 09:06:58.541840 248107 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil> [] 0s} 192.168.39.78 22 <nil> <nil>}
I1115 09:06:58.541855 248107 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-663794 && echo "addons-663794" | sudo tee /etc/hostname
I1115 09:06:58.668477 248107 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-663794
I1115 09:06:58.671319 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:58.671744 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:06:58.671769 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:58.671950 248107 main.go:143] libmachine: Using SSH client type: native
I1115 09:06:58.672143 248107 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil> [] 0s} 192.168.39.78 22 <nil> <nil>}
I1115 09:06:58.672165 248107 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-663794' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-663794/g' /etc/hosts;
else
echo '127.0.1.1 addons-663794' | sudo tee -a /etc/hosts;
fi
fi
I1115 09:06:58.790822 248107 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1115 09:06:58.790863 248107 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21895-243545/.minikube CaCertPath:/home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21895-243545/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21895-243545/.minikube}
I1115 09:06:58.790934 248107 buildroot.go:174] setting up certificates
I1115 09:06:58.790950 248107 provision.go:84] configureAuth start
I1115 09:06:58.794550 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:58.795050 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:06:58.795078 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:58.797323 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:58.797776 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:06:58.797796 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:58.797926 248107 provision.go:143] copyHostCerts
I1115 09:06:58.797994 248107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21895-243545/.minikube/cert.pem (1123 bytes)
I1115 09:06:58.798105 248107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21895-243545/.minikube/key.pem (1675 bytes)
I1115 09:06:58.798174 248107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21895-243545/.minikube/ca.pem (1082 bytes)
I1115 09:06:58.798227 248107 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21895-243545/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca-key.pem org=jenkins.addons-663794 san=[127.0.0.1 192.168.39.78 addons-663794 localhost minikube]
I1115 09:06:58.991943 248107 provision.go:177] copyRemoteCerts
I1115 09:06:58.992015 248107 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1115 09:06:58.994620 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:58.994949 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:06:58.994969 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:58.995155 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:06:59.081836 248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1115 09:06:59.118959 248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1115 09:06:59.154779 248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1115 09:06:59.184147 248107 provision.go:87] duration metric: took 393.17264ms to configureAuth
I1115 09:06:59.184188 248107 buildroot.go:189] setting minikube options for container-runtime
I1115 09:06:59.184384 248107 config.go:182] Loaded profile config "addons-663794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:06:59.187291 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:59.187725 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:06:59.187752 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:59.187912 248107 main.go:143] libmachine: Using SSH client type: native
I1115 09:06:59.188111 248107 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil> [] 0s} 192.168.39.78 22 <nil> <nil>}
I1115 09:06:59.188126 248107 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1115 09:06:59.437357 248107 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1115 09:06:59.437386 248107 machine.go:97] duration metric: took 1.014375288s to provisionDockerMachine
I1115 09:06:59.437399 248107 client.go:176] duration metric: took 17.841489233s to LocalClient.Create
I1115 09:06:59.437418 248107 start.go:167] duration metric: took 17.84154843s to libmachine.API.Create "addons-663794"
I1115 09:06:59.437428 248107 start.go:293] postStartSetup for "addons-663794" (driver="kvm2")
I1115 09:06:59.437453 248107 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1115 09:06:59.437539 248107 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1115 09:06:59.440660 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:59.441163 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:06:59.441197 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:59.441375 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:06:59.526263 248107 ssh_runner.go:195] Run: cat /etc/os-release
I1115 09:06:59.531006 248107 info.go:137] Remote host: Buildroot 2025.02
I1115 09:06:59.531030 248107 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-243545/.minikube/addons for local assets ...
I1115 09:06:59.531120 248107 filesync.go:126] Scanning /home/jenkins/minikube-integration/21895-243545/.minikube/files for local assets ...
I1115 09:06:59.531166 248107 start.go:296] duration metric: took 93.726582ms for postStartSetup
I1115 09:06:59.534046 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:59.534421 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:06:59.534462 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:59.534687 248107 profile.go:143] Saving config to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/config.json ...
I1115 09:06:59.534885 248107 start.go:128] duration metric: took 17.940518976s to createHost
I1115 09:06:59.536964 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:59.537419 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:06:59.537494 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:59.537707 248107 main.go:143] libmachine: Using SSH client type: native
I1115 09:06:59.537933 248107 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil> [] 0s} 192.168.39.78 22 <nil> <nil>}
I1115 09:06:59.537956 248107 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1115 09:06:59.645834 248107 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763197619.602217728
I1115 09:06:59.645865 248107 fix.go:216] guest clock: 1763197619.602217728
I1115 09:06:59.645872 248107 fix.go:229] Guest: 2025-11-15 09:06:59.602217728 +0000 UTC Remote: 2025-11-15 09:06:59.53489904 +0000 UTC m=+18.035095309 (delta=67.318688ms)
I1115 09:06:59.645888 248107 fix.go:200] guest clock delta is within tolerance: 67.318688ms
I1115 09:06:59.645893 248107 start.go:83] releasing machines lock for "addons-663794", held for 18.051598507s
I1115 09:06:59.648983 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:59.649502 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:06:59.649540 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:59.650144 248107 ssh_runner.go:195] Run: cat /version.json
I1115 09:06:59.650226 248107 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1115 09:06:59.653475 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:59.653674 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:59.653900 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:06:59.653925 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:59.654064 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:06:59.654067 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:06:59.654093 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:06:59.654290 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:06:59.733122 248107 ssh_runner.go:195] Run: systemctl --version
I1115 09:06:59.763753 248107 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1115 09:06:59.922347 248107 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1115 09:06:59.928971 248107 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1115 09:06:59.929041 248107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1115 09:06:59.949165 248107 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1115 09:06:59.949214 248107 start.go:496] detecting cgroup driver to use...
I1115 09:06:59.949294 248107 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1115 09:06:59.968637 248107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1115 09:06:59.985036 248107 docker.go:218] disabling cri-docker service (if available) ...
I1115 09:06:59.985124 248107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1115 09:07:00.002145 248107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1115 09:07:00.018433 248107 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1115 09:07:00.165090 248107 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1115 09:07:00.378483 248107 docker.go:234] disabling docker service ...
I1115 09:07:00.378556 248107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1115 09:07:00.396010 248107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1115 09:07:00.410656 248107 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1115 09:07:00.582889 248107 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1115 09:07:00.729737 248107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1115 09:07:00.746127 248107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1115 09:07:00.768365 248107 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1115 09:07:00.768440 248107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1115 09:07:00.780573 248107 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1115 09:07:00.780680 248107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1115 09:07:00.793379 248107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1115 09:07:00.807346 248107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1115 09:07:00.820241 248107 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1115 09:07:00.833951 248107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1115 09:07:00.846402 248107 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1115 09:07:00.866167 248107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1115 09:07:00.878800 248107 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1115 09:07:00.889148 248107 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1115 09:07:00.889223 248107 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1115 09:07:00.908703 248107 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1115 09:07:00.920326 248107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1115 09:07:01.052502 248107 ssh_runner.go:195] Run: sudo systemctl restart crio
I1115 09:07:01.158647 248107 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1115 09:07:01.158749 248107 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1115 09:07:01.163765 248107 start.go:564] Will wait 60s for crictl version
I1115 09:07:01.163867 248107 ssh_runner.go:195] Run: which crictl
I1115 09:07:01.167823 248107 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1115 09:07:01.208507 248107 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1115 09:07:01.208598 248107 ssh_runner.go:195] Run: crio --version
I1115 09:07:01.236835 248107 ssh_runner.go:195] Run: crio --version
I1115 09:07:01.269285 248107 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
I1115 09:07:01.273203 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:01.273666 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:07:01.273696 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:01.273885 248107 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1115 09:07:01.278502 248107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1115 09:07:01.293649 248107 kubeadm.go:884] updating cluster {Name:addons-663794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-663794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1115 09:07:01.293756 248107 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1115 09:07:01.293797 248107 ssh_runner.go:195] Run: sudo crictl images --output json
I1115 09:07:01.329320 248107 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
I1115 09:07:01.329399 248107 ssh_runner.go:195] Run: which lz4
I1115 09:07:01.333687 248107 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1115 09:07:01.338288 248107 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1115 09:07:01.338318 248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
I1115 09:07:02.657257 248107 crio.go:462] duration metric: took 1.323614187s to copy over tarball
I1115 09:07:02.657335 248107 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1115 09:07:04.208081 248107 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.55072014s)
I1115 09:07:04.208106 248107 crio.go:469] duration metric: took 1.550816094s to extract the tarball
I1115 09:07:04.208114 248107 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1115 09:07:04.248160 248107 ssh_runner.go:195] Run: sudo crictl images --output json
I1115 09:07:04.296016 248107 crio.go:514] all images are preloaded for cri-o runtime.
I1115 09:07:04.296040 248107 cache_images.go:86] Images are preloaded, skipping loading
I1115 09:07:04.296048 248107 kubeadm.go:935] updating node { 192.168.39.78 8443 v1.34.1 crio true true} ...
I1115 09:07:04.296149 248107 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-663794 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:addons-663794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1115 09:07:04.296216 248107 ssh_runner.go:195] Run: crio config
I1115 09:07:04.340736 248107 cni.go:84] Creating CNI manager for ""
I1115 09:07:04.340780 248107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1115 09:07:04.340805 248107 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1115 09:07:04.340840 248107 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-663794 NodeName:addons-663794 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1115 09:07:04.341031 248107 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.78
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-663794"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.78"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1115 09:07:04.341115 248107 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1115 09:07:04.352538 248107 binaries.go:51] Found k8s binaries, skipping transfer
I1115 09:07:04.352613 248107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1115 09:07:04.365470 248107 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I1115 09:07:04.387028 248107 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1115 09:07:04.407121 248107 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
I1115 09:07:04.426602 248107 ssh_runner.go:195] Run: grep 192.168.39.78 control-plane.minikube.internal$ /etc/hosts
I1115 09:07:04.430681 248107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.78 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1115 09:07:04.444673 248107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1115 09:07:04.582233 248107 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1115 09:07:04.612886 248107 certs.go:69] Setting up /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794 for IP: 192.168.39.78
I1115 09:07:04.612907 248107 certs.go:195] generating shared ca certs ...
I1115 09:07:04.612924 248107 certs.go:227] acquiring lock for ca certs: {Name:mk5e9c8388448c40ecbfe3d7332e5965c3ae4b4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1115 09:07:04.613114 248107 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21895-243545/.minikube/ca.key
I1115 09:07:04.886492 248107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-243545/.minikube/ca.crt ...
I1115 09:07:04.886525 248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/ca.crt: {Name:mk716662fde1df6affa6446a5e91abc5c8085d58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1115 09:07:04.886737 248107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-243545/.minikube/ca.key ...
I1115 09:07:04.886751 248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/ca.key: {Name:mk43adaff6151548c227d0b30489e49a7901a10b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1115 09:07:04.886843 248107 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21895-243545/.minikube/proxy-client-ca.key
I1115 09:07:05.192768 248107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-243545/.minikube/proxy-client-ca.crt ...
I1115 09:07:05.192807 248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/proxy-client-ca.crt: {Name:mkae4e4311952cda911f41d7a2357cfe0b8cdbf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1115 09:07:05.192993 248107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-243545/.minikube/proxy-client-ca.key ...
I1115 09:07:05.193007 248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/proxy-client-ca.key: {Name:mk6a082586b2c55a45c718f609b69033934617eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1115 09:07:05.193096 248107 certs.go:257] generating profile certs ...
I1115 09:07:05.193160 248107 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.key
I1115 09:07:05.193185 248107 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt with IP's: []
I1115 09:07:05.273214 248107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt ...
I1115 09:07:05.273246 248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.crt: {Name:mkff9cbf722a83a5166951c6a00c0dd7ae3051a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1115 09:07:05.273409 248107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.key ...
I1115 09:07:05.273421 248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/client.key: {Name:mk1e2f88357869296dc30c00ecf355d769532b8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1115 09:07:05.273503 248107 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.key.8101946b
I1115 09:07:05.273522 248107 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.crt.8101946b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78]
I1115 09:07:05.333253 248107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.crt.8101946b ...
I1115 09:07:05.333284 248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.crt.8101946b: {Name:mkfe20529a56d056e474711d95ffc98e9dffd8d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1115 09:07:05.333455 248107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.key.8101946b ...
I1115 09:07:05.333468 248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.key.8101946b: {Name:mke32129a2275fb1044c3b2819e0014da7333d51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1115 09:07:05.333543 248107 certs.go:382] copying /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.crt.8101946b -> /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.crt
I1115 09:07:05.333617 248107 certs.go:386] copying /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.key.8101946b -> /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.key
I1115 09:07:05.333664 248107 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/proxy-client.key
I1115 09:07:05.333682 248107 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/proxy-client.crt with IP's: []
I1115 09:07:05.451343 248107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/proxy-client.crt ...
I1115 09:07:05.451374 248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/proxy-client.crt: {Name:mk7d0bfb9bbe7381b8c5f53d09c41020c3e45f51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1115 09:07:05.451556 248107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/proxy-client.key ...
I1115 09:07:05.451572 248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/proxy-client.key: {Name:mk2f123e88e944188fc34e55170d3285e7b9191b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1115 09:07:05.451747 248107 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca-key.pem (1675 bytes)
I1115 09:07:05.451782 248107 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/ca.pem (1082 bytes)
I1115 09:07:05.451805 248107 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/cert.pem (1123 bytes)
I1115 09:07:05.451834 248107 certs.go:484] found cert: /home/jenkins/minikube-integration/21895-243545/.minikube/certs/key.pem (1675 bytes)
I1115 09:07:05.452372 248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1115 09:07:05.497566 248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1115 09:07:05.532544 248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1115 09:07:05.561477 248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1115 09:07:05.589159 248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1115 09:07:05.617842 248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1115 09:07:05.647537 248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1115 09:07:05.676382 248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/profiles/addons-663794/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1115 09:07:05.704578 248107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21895-243545/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1115 09:07:05.733288 248107 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1115 09:07:05.753028 248107 ssh_runner.go:195] Run: openssl version
I1115 09:07:05.759710 248107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1115 09:07:05.772349 248107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1115 09:07:05.777309 248107 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 15 09:07 /usr/share/ca-certificates/minikubeCA.pem
I1115 09:07:05.777376 248107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1115 09:07:05.784754 248107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1115 09:07:05.797631 248107 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1115 09:07:05.802359 248107 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1115 09:07:05.802421 248107 kubeadm.go:401] StartCluster: {Name:addons-663794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-663794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1115 09:07:05.802552 248107 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1115 09:07:05.802616 248107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1115 09:07:05.842438 248107 cri.go:89] found id: ""
I1115 09:07:05.842536 248107 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1115 09:07:05.854429 248107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1115 09:07:05.866197 248107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1115 09:07:05.877700 248107 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1115 09:07:05.877725 248107 kubeadm.go:158] found existing configuration files:
I1115 09:07:05.877774 248107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1115 09:07:05.888409 248107 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1115 09:07:05.888487 248107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1115 09:07:05.899499 248107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1115 09:07:05.909878 248107 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1115 09:07:05.909943 248107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1115 09:07:05.921999 248107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1115 09:07:05.932397 248107 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1115 09:07:05.932489 248107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1115 09:07:05.944349 248107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1115 09:07:05.955674 248107 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1115 09:07:05.955757 248107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1115 09:07:05.967200 248107 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1115 09:07:06.137144 248107 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1115 09:07:18.006164 248107 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
I1115 09:07:18.006241 248107 kubeadm.go:319] [preflight] Running pre-flight checks
I1115 09:07:18.006349 248107 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1115 09:07:18.006535 248107 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1115 09:07:18.006683 248107 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1115 09:07:18.006782 248107 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1115 09:07:18.008655 248107 out.go:252] - Generating certificates and keys ...
I1115 09:07:18.008749 248107 kubeadm.go:319] [certs] Using existing ca certificate authority
I1115 09:07:18.008835 248107 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1115 09:07:18.008945 248107 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1115 09:07:18.009030 248107 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1115 09:07:18.009122 248107 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1115 09:07:18.009194 248107 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1115 09:07:18.009276 248107 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1115 09:07:18.009440 248107 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-663794 localhost] and IPs [192.168.39.78 127.0.0.1 ::1]
I1115 09:07:18.009534 248107 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1115 09:07:18.009691 248107 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-663794 localhost] and IPs [192.168.39.78 127.0.0.1 ::1]
I1115 09:07:18.009785 248107 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1115 09:07:18.009891 248107 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1115 09:07:18.009962 248107 kubeadm.go:319] [certs] Generating "sa" key and public key
I1115 09:07:18.010040 248107 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1115 09:07:18.010117 248107 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1115 09:07:18.010199 248107 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1115 09:07:18.010278 248107 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1115 09:07:18.010380 248107 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1115 09:07:18.010436 248107 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1115 09:07:18.010519 248107 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1115 09:07:18.010573 248107 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1115 09:07:18.011819 248107 out.go:252] - Booting up control plane ...
I1115 09:07:18.011940 248107 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1115 09:07:18.012053 248107 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1115 09:07:18.012146 248107 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1115 09:07:18.012275 248107 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1115 09:07:18.012362 248107 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1115 09:07:18.012465 248107 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1115 09:07:18.012549 248107 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1115 09:07:18.012584 248107 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1115 09:07:18.012694 248107 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1115 09:07:18.012786 248107 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1115 09:07:18.012867 248107 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002791632s
I1115 09:07:18.013017 248107 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1115 09:07:18.013138 248107 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.78:8443/livez
I1115 09:07:18.013263 248107 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1115 09:07:18.013370 248107 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1115 09:07:18.013493 248107 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.780182478s
I1115 09:07:18.013556 248107 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.324840451s
I1115 09:07:18.013619 248107 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.503632372s
I1115 09:07:18.013711 248107 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1115 09:07:18.013834 248107 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1115 09:07:18.013919 248107 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1115 09:07:18.014086 248107 kubeadm.go:319] [mark-control-plane] Marking the node addons-663794 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1115 09:07:18.014137 248107 kubeadm.go:319] [bootstrap-token] Using token: bi6n1i.svktgwn7kozvn22r
I1115 09:07:18.015425 248107 out.go:252] - Configuring RBAC rules ...
I1115 09:07:18.015529 248107 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1115 09:07:18.015638 248107 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1115 09:07:18.015779 248107 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1115 09:07:18.015906 248107 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1115 09:07:18.016006 248107 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1115 09:07:18.016096 248107 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1115 09:07:18.016264 248107 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1115 09:07:18.016310 248107 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1115 09:07:18.016348 248107 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1115 09:07:18.016354 248107 kubeadm.go:319]
I1115 09:07:18.016405 248107 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1115 09:07:18.016411 248107 kubeadm.go:319]
I1115 09:07:18.016495 248107 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1115 09:07:18.016513 248107 kubeadm.go:319]
I1115 09:07:18.016560 248107 kubeadm.go:319] mkdir -p $HOME/.kube
I1115 09:07:18.016643 248107 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1115 09:07:18.016718 248107 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1115 09:07:18.016727 248107 kubeadm.go:319]
I1115 09:07:18.016798 248107 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1115 09:07:18.016812 248107 kubeadm.go:319]
I1115 09:07:18.016887 248107 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1115 09:07:18.016900 248107 kubeadm.go:319]
I1115 09:07:18.016958 248107 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1115 09:07:18.017019 248107 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1115 09:07:18.017074 248107 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1115 09:07:18.017079 248107 kubeadm.go:319]
I1115 09:07:18.017157 248107 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1115 09:07:18.017220 248107 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1115 09:07:18.017226 248107 kubeadm.go:319]
I1115 09:07:18.017319 248107 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bi6n1i.svktgwn7kozvn22r \
I1115 09:07:18.017465 248107 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:955850964525d9732287aff1ea5d847a03627ee2de071247980c680415246b6c \
I1115 09:07:18.017499 248107 kubeadm.go:319] --control-plane
I1115 09:07:18.017508 248107 kubeadm.go:319]
I1115 09:07:18.017596 248107 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1115 09:07:18.017606 248107 kubeadm.go:319]
I1115 09:07:18.017670 248107 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bi6n1i.svktgwn7kozvn22r \
I1115 09:07:18.017779 248107 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:955850964525d9732287aff1ea5d847a03627ee2de071247980c680415246b6c
I1115 09:07:18.017790 248107 cni.go:84] Creating CNI manager for ""
I1115 09:07:18.017797 248107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1115 09:07:18.019302 248107 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1115 09:07:18.020439 248107 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1115 09:07:18.036245 248107 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1115 09:07:18.058501 248107 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1115 09:07:18.058589 248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1115 09:07:18.058627 248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-663794 minikube.k8s.io/updated_at=2025_11_15T09_07_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0 minikube.k8s.io/name=addons-663794 minikube.k8s.io/primary=true
I1115 09:07:18.102481 248107 ops.go:34] apiserver oom_adj: -16
I1115 09:07:18.228632 248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1115 09:07:18.729029 248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1115 09:07:19.228837 248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1115 09:07:19.728895 248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1115 09:07:20.228760 248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1115 09:07:20.728807 248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1115 09:07:21.229085 248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1115 09:07:21.729488 248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1115 09:07:22.229755 248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1115 09:07:22.728749 248107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1115 09:07:22.848117 248107 kubeadm.go:1114] duration metric: took 4.789603502s to wait for elevateKubeSystemPrivileges
I1115 09:07:22.848165 248107 kubeadm.go:403] duration metric: took 17.045749764s to StartCluster
I1115 09:07:22.848191 248107 settings.go:142] acquiring lock: {Name:mk00f9aa5a46ce077bf17ee5efb58b1b4c2cdbac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1115 09:07:22.848351 248107 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21895-243545/kubeconfig
I1115 09:07:22.849075 248107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21895-243545/kubeconfig: {Name:mk85b3ca0ac5a906394239d54dc0b40d127f71ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1115 09:07:22.849361 248107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1115 09:07:22.849400 248107 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1115 09:07:22.849470 248107 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1115 09:07:22.849617 248107 addons.go:70] Setting yakd=true in profile "addons-663794"
I1115 09:07:22.849640 248107 addons.go:239] Setting addon yakd=true in "addons-663794"
I1115 09:07:22.849634 248107 addons.go:70] Setting cloud-spanner=true in profile "addons-663794"
I1115 09:07:22.849664 248107 addons.go:239] Setting addon cloud-spanner=true in "addons-663794"
I1115 09:07:22.849664 248107 config.go:182] Loaded profile config "addons-663794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:07:22.849680 248107 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-663794"
I1115 09:07:22.849693 248107 addons.go:70] Setting default-storageclass=true in profile "addons-663794"
I1115 09:07:22.849699 248107 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-663794"
I1115 09:07:22.849699 248107 host.go:66] Checking if "addons-663794" exists ...
I1115 09:07:22.849703 248107 addons.go:70] Setting registry=true in profile "addons-663794"
I1115 09:07:22.849711 248107 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-663794"
I1115 09:07:22.849718 248107 addons.go:239] Setting addon registry=true in "addons-663794"
I1115 09:07:22.849671 248107 host.go:66] Checking if "addons-663794" exists ...
I1115 09:07:22.849722 248107 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-663794"
I1115 09:07:22.849743 248107 host.go:66] Checking if "addons-663794" exists ...
I1115 09:07:22.849744 248107 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-663794"
I1115 09:07:22.849773 248107 host.go:66] Checking if "addons-663794" exists ...
I1115 09:07:22.850615 248107 addons.go:70] Setting volcano=true in profile "addons-663794"
I1115 09:07:22.850631 248107 addons.go:70] Setting registry-creds=true in profile "addons-663794"
I1115 09:07:22.850638 248107 addons.go:239] Setting addon volcano=true in "addons-663794"
I1115 09:07:22.850650 248107 addons.go:239] Setting addon registry-creds=true in "addons-663794"
I1115 09:07:22.850671 248107 host.go:66] Checking if "addons-663794" exists ...
I1115 09:07:22.850683 248107 host.go:66] Checking if "addons-663794" exists ...
I1115 09:07:22.849675 248107 addons.go:70] Setting storage-provisioner=true in profile "addons-663794"
I1115 09:07:22.850800 248107 addons.go:239] Setting addon storage-provisioner=true in "addons-663794"
I1115 09:07:22.850875 248107 host.go:66] Checking if "addons-663794" exists ...
I1115 09:07:22.850975 248107 addons.go:70] Setting ingress=true in profile "addons-663794"
I1115 09:07:22.850994 248107 addons.go:239] Setting addon ingress=true in "addons-663794"
I1115 09:07:22.851043 248107 host.go:66] Checking if "addons-663794" exists ...
I1115 09:07:22.849654 248107 addons.go:70] Setting metrics-server=true in profile "addons-663794"
I1115 09:07:22.851076 248107 addons.go:239] Setting addon metrics-server=true in "addons-663794"
I1115 09:07:22.851103 248107 host.go:66] Checking if "addons-663794" exists ...
I1115 09:07:22.851281 248107 addons.go:70] Setting gcp-auth=true in profile "addons-663794"
I1115 09:07:22.851306 248107 mustload.go:66] Loading cluster: addons-663794
I1115 09:07:22.851506 248107 config.go:182] Loaded profile config "addons-663794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1115 09:07:22.849684 248107 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-663794"
I1115 09:07:22.849695 248107 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-663794"
I1115 09:07:22.851664 248107 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-663794"
I1115 09:07:22.851699 248107 host.go:66] Checking if "addons-663794" exists ...
I1115 09:07:22.851741 248107 addons.go:70] Setting volumesnapshots=true in profile "addons-663794"
I1115 09:07:22.851764 248107 addons.go:70] Setting ingress-dns=true in profile "addons-663794"
I1115 09:07:22.851772 248107 addons.go:239] Setting addon volumesnapshots=true in "addons-663794"
I1115 09:07:22.851776 248107 addons.go:239] Setting addon ingress-dns=true in "addons-663794"
I1115 09:07:22.851801 248107 host.go:66] Checking if "addons-663794" exists ...
I1115 09:07:22.851817 248107 addons.go:70] Setting inspektor-gadget=true in profile "addons-663794"
I1115 09:07:22.851828 248107 addons.go:239] Setting addon inspektor-gadget=true in "addons-663794"
I1115 09:07:22.851855 248107 host.go:66] Checking if "addons-663794" exists ...
I1115 09:07:22.851641 248107 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-663794"
I1115 09:07:22.852069 248107 host.go:66] Checking if "addons-663794" exists ...
I1115 09:07:22.851803 248107 host.go:66] Checking if "addons-663794" exists ...
I1115 09:07:22.852975 248107 out.go:179] * Verifying Kubernetes components...
I1115 09:07:22.854256 248107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1115 09:07:22.858948 248107 addons.go:239] Setting addon default-storageclass=true in "addons-663794"
I1115 09:07:22.858959 248107 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-663794"
I1115 09:07:22.858989 248107 host.go:66] Checking if "addons-663794" exists ...
I1115 09:07:22.858998 248107 host.go:66] Checking if "addons-663794" exists ...
I1115 09:07:22.859333 248107 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1115 09:07:22.859344 248107 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
I1115 09:07:22.859417 248107 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1115 09:07:22.859357 248107 out.go:179] - Using image docker.io/registry:3.0.0
W1115 09:07:22.859938 248107 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1115 09:07:22.860621 248107 host.go:66] Checking if "addons-663794" exists ...
I1115 09:07:22.860813 248107 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1115 09:07:22.860833 248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1115 09:07:22.861540 248107 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1115 09:07:22.861548 248107 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1115 09:07:22.861620 248107 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1115 09:07:22.861637 248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1115 09:07:22.861541 248107 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1115 09:07:22.862336 248107 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1115 09:07:22.862759 248107 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1115 09:07:22.862787 248107 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1115 09:07:22.862784 248107 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
I1115 09:07:22.862884 248107 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1115 09:07:22.862972 248107 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1115 09:07:22.863370 248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1115 09:07:22.863687 248107 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1115 09:07:22.864192 248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1115 09:07:22.864571 248107 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1115 09:07:22.864595 248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1115 09:07:22.864606 248107 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1115 09:07:22.864619 248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1115 09:07:22.864643 248107 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
I1115 09:07:22.864571 248107 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1115 09:07:22.865061 248107 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1115 09:07:22.865071 248107 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1115 09:07:22.865071 248107 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1115 09:07:22.864675 248107 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1115 09:07:22.864707 248107 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1115 09:07:22.865549 248107 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1115 09:07:22.866364 248107 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1115 09:07:22.866381 248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1115 09:07:22.866380 248107 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1115 09:07:22.867285 248107 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1115 09:07:22.867301 248107 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1115 09:07:22.868118 248107 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1115 09:07:22.868159 248107 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1115 09:07:22.868543 248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1115 09:07:22.868907 248107 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1115 09:07:22.869639 248107 out.go:179] - Using image docker.io/busybox:stable
I1115 09:07:22.870365 248107 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1115 09:07:22.870674 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.871205 248107 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1115 09:07:22.871221 248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1115 09:07:22.871401 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.871935 248107 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1115 09:07:22.872146 248107 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1115 09:07:22.872167 248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1115 09:07:22.872629 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:07:22.872687 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.872821 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.872873 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:07:22.872957 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.873520 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:07:22.873596 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:07:22.873945 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.874574 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:07:22.874612 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.875070 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.875468 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:07:22.875538 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:07:22.875571 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.876154 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.876662 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:07:22.876829 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.876931 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:07:22.876966 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.877521 248107 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1115 09:07:22.877641 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:07:22.877650 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.877675 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.877681 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:07:22.877791 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.878517 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:07:22.878533 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:07:22.878733 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.879234 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:07:22.879263 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:07:22.879300 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.879502 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.879664 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.879955 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:07:22.879987 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:07:22.880031 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.880201 248107 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1115 09:07:22.880387 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:07:22.880577 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:07:22.880613 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.880789 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:07:22.880825 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.880858 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:07:22.880947 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.881212 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:07:22.881680 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:07:22.881703 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.881876 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:07:22.882064 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.882070 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.882568 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:07:22.882629 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:07:22.882664 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.882700 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.882634 248107 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1115 09:07:22.882900 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:07:22.882906 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:07:22.884982 248107 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1115 09:07:22.885982 248107 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1115 09:07:22.886790 248107 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1115 09:07:22.886806 248107 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1115 09:07:22.889623 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.890087 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:07:22.890112 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:22.890278 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
W1115 09:07:23.298991 248107 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:44588->192.168.39.78:22: read: connection reset by peer
I1115 09:07:23.299029 248107 retry.go:31] will retry after 366.623498ms: ssh: handshake failed: read tcp 192.168.39.1:44588->192.168.39.78:22: read: connection reset by peer
I1115 09:07:23.852976 248107 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1115 09:07:23.853003 248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1115 09:07:23.935387 248107 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1115 09:07:23.935414 248107 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1115 09:07:23.946234 248107 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1115 09:07:23.946257 248107 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1115 09:07:23.956926 248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1115 09:07:23.957619 248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1115 09:07:23.962879 248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1115 09:07:23.963937 248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1115 09:07:24.069313 248107 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1115 09:07:24.069341 248107 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1115 09:07:24.076533 248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1115 09:07:24.108238 248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1115 09:07:24.139246 248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1115 09:07:24.199045 248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1115 09:07:24.208090 248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1115 09:07:24.307325 248107 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1115 09:07:24.307356 248107 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1115 09:07:24.342774 248107 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.493374848s)
I1115 09:07:24.342832 248107 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.488547473s)
I1115 09:07:24.342925 248107 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1115 09:07:24.342974 248107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1115 09:07:24.361260 248107 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1115 09:07:24.361288 248107 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1115 09:07:24.376832 248107 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1115 09:07:24.376856 248107 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1115 09:07:24.441920 248107 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1115 09:07:24.441943 248107 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1115 09:07:24.441954 248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1115 09:07:24.441954 248107 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1115 09:07:24.659261 248107 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1115 09:07:24.659302 248107 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1115 09:07:24.714316 248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1115 09:07:24.754813 248107 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1115 09:07:24.754856 248107 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1115 09:07:24.823376 248107 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1115 09:07:24.823406 248107 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1115 09:07:24.847828 248107 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1115 09:07:24.847863 248107 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1115 09:07:24.883883 248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1115 09:07:24.890982 248107 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1115 09:07:24.891016 248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1115 09:07:25.025347 248107 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1115 09:07:25.025382 248107 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1115 09:07:25.154340 248107 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1115 09:07:25.154372 248107 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1115 09:07:25.190218 248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1115 09:07:25.201352 248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1115 09:07:25.427831 248107 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1115 09:07:25.427863 248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1115 09:07:25.566631 248107 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1115 09:07:25.566663 248107 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1115 09:07:25.814586 248107 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1115 09:07:25.814613 248107 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1115 09:07:25.856919 248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1115 09:07:25.896013 248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.939047246s)
I1115 09:07:25.896075 248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.93842362s)
I1115 09:07:25.975633 248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.012718036s)
I1115 09:07:26.146929 248107 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1115 09:07:26.146967 248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1115 09:07:26.672229 248107 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1115 09:07:26.672260 248107 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1115 09:07:27.064960 248107 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1115 09:07:27.064984 248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1115 09:07:27.382699 248107 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1115 09:07:27.382723 248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1115 09:07:27.811481 248107 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1115 09:07:27.811507 248107 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1115 09:07:28.073076 248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1115 09:07:29.133580 248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.169606655s)
I1115 09:07:30.307005 248107 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1115 09:07:30.310176 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:30.310716 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:07:30.310744 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:30.310923 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:07:30.667161 248107 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1115 09:07:30.849901 248107 addons.go:239] Setting addon gcp-auth=true in "addons-663794"
I1115 09:07:30.849955 248107 host.go:66] Checking if "addons-663794" exists ...
I1115 09:07:30.851964 248107 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1115 09:07:30.854731 248107 main.go:143] libmachine: domain addons-663794 has defined MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:30.855216 248107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:40:3c:f2", ip: ""} in network mk-addons-663794: {Iface:virbr1 ExpiryTime:2025-11-15 10:06:57 +0000 UTC Type:0 Mac:52:54:00:40:3c:f2 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:addons-663794 Clientid:01:52:54:00:40:3c:f2}
I1115 09:07:30.855241 248107 main.go:143] libmachine: domain addons-663794 has defined IP address 192.168.39.78 and MAC address 52:54:00:40:3c:f2 in network mk-addons-663794
I1115 09:07:30.855468 248107 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21895-243545/.minikube/machines/addons-663794/id_rsa Username:docker}
I1115 09:07:31.702326 248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.625745813s)
I1115 09:07:31.702373 248107 addons.go:480] Verifying addon ingress=true in "addons-663794"
I1115 09:07:31.702372 248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.594089756s)
I1115 09:07:31.702536 248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.563261864s)
I1115 09:07:31.702588 248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.503516365s)
I1115 09:07:31.702649 248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.494531977s)
I1115 09:07:31.702686 248107 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.359741673s)
I1115 09:07:31.702711 248107 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.359715763s)
I1115 09:07:31.702728 248107 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1115 09:07:31.702791 248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.988410815s)
I1115 09:07:31.702833 248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.818914491s)
I1115 09:07:31.702929 248107 addons.go:480] Verifying addon registry=true in "addons-663794"
I1115 09:07:31.702952 248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.512699412s)
I1115 09:07:31.702976 248107 addons.go:480] Verifying addon metrics-server=true in "addons-663794"
I1115 09:07:31.702996 248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.501600728s)
I1115 09:07:31.703666 248107 node_ready.go:35] waiting up to 6m0s for node "addons-663794" to be "Ready" ...
I1115 09:07:31.705154 248107 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-663794 service yakd-dashboard -n yakd-dashboard
I1115 09:07:31.705167 248107 out.go:179] * Verifying registry addon...
I1115 09:07:31.705161 248107 out.go:179] * Verifying ingress addon...
I1115 09:07:31.707129 248107 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1115 09:07:31.707425 248107 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1115 09:07:31.742288 248107 node_ready.go:49] node "addons-663794" is "Ready"
I1115 09:07:31.742323 248107 node_ready.go:38] duration metric: took 38.616725ms for node "addons-663794" to be "Ready" ...
I1115 09:07:31.742339 248107 api_server.go:52] waiting for apiserver process to appear ...
I1115 09:07:31.742392 248107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1115 09:07:31.778333 248107 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1115 09:07:31.778364 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:31.779070 248107 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1115 09:07:31.779090 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
W1115 09:07:31.793365 248107 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I1115 09:07:32.029695 248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.172729567s)
W1115 09:07:32.029750 248107 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1115 09:07:32.029797 248107 retry.go:31] will retry after 132.54435ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1115 09:07:32.162587 248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1115 09:07:32.211945 248107 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-663794" context rescaled to 1 replicas
I1115 09:07:32.218410 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:32.218627 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:32.801830 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:32.802587 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:33.086717 248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.013588129s)
I1115 09:07:33.086767 248107 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-663794"
I1115 09:07:33.086768 248107 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.234777146s)
I1115 09:07:33.086868 248107 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.344455453s)
I1115 09:07:33.086911 248107 api_server.go:72] duration metric: took 10.237478946s to wait for apiserver process to appear ...
I1115 09:07:33.086922 248107 api_server.go:88] waiting for apiserver healthz status ...
I1115 09:07:33.086945 248107 api_server.go:253] Checking apiserver healthz at https://192.168.39.78:8443/healthz ...
I1115 09:07:33.089135 248107 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1115 09:07:33.089172 248107 out.go:179] * Verifying csi-hostpath-driver addon...
I1115 09:07:33.090297 248107 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1115 09:07:33.091208 248107 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1115 09:07:33.091296 248107 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1115 09:07:33.091312 248107 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1115 09:07:33.138023 248107 api_server.go:279] https://192.168.39.78:8443/healthz returned 200:
ok
I1115 09:07:33.139155 248107 api_server.go:141] control plane version: v1.34.1
I1115 09:07:33.139186 248107 api_server.go:131] duration metric: took 52.255311ms to wait for apiserver health ...
I1115 09:07:33.139199 248107 system_pods.go:43] waiting for kube-system pods to appear ...
I1115 09:07:33.147800 248107 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1115 09:07:33.147828 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:33.158009 248107 system_pods.go:59] 20 kube-system pods found
I1115 09:07:33.158051 248107 system_pods.go:61] "amd-gpu-device-plugin-wqpn5" [d0adea6d-3b3e-41d2-8340-2d42b53060e4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1115 09:07:33.158060 248107 system_pods.go:61] "coredns-66bc5c9577-8jkds" [dd0f8515-daad-4d10-9aba-fcd0e8b6e400] Running
I1115 09:07:33.158069 248107 system_pods.go:61] "coredns-66bc5c9577-cm284" [23ab3d77-85ec-40f3-afff-0a20ae3716f2] Running
I1115 09:07:33.158079 248107 system_pods.go:61] "csi-hostpath-attacher-0" [f3ac8c44-e97a-415a-9da5-2861ac50ed3c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1115 09:07:33.158089 248107 system_pods.go:61] "csi-hostpath-resizer-0" [7d461536-20d4-4e76-ad1a-a96a3fad5a61] Pending
I1115 09:07:33.158098 248107 system_pods.go:61] "csi-hostpathplugin-zsbwn" [6717d9e7-923f-476e-97d5-2384885e4838] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1115 09:07:33.158105 248107 system_pods.go:61] "etcd-addons-663794" [6900238e-53a4-4ca1-a620-18fcc9a25270] Running
I1115 09:07:33.158112 248107 system_pods.go:61] "kube-apiserver-addons-663794" [3091663f-e6e7-4f57-88c7-6992940c38c9] Running
I1115 09:07:33.158122 248107 system_pods.go:61] "kube-controller-manager-addons-663794" [6dbf7d10-e0f0-4d7e-a183-1055122ae05d] Running
I1115 09:07:33.158131 248107 system_pods.go:61] "kube-ingress-dns-minikube" [25109e09-af9d-420d-b989-529552614336] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1115 09:07:33.158138 248107 system_pods.go:61] "kube-proxy-kjfgf" [3eeef006-089f-401e-956f-df7c8c9d9a44] Running
I1115 09:07:33.158145 248107 system_pods.go:61] "kube-scheduler-addons-663794" [0d5017fe-6032-4daf-a785-cf42e429886f] Running
I1115 09:07:33.158155 248107 system_pods.go:61] "metrics-server-85b7d694d7-z4cnh" [a5aaf6d1-1d0d-439f-a5f1-50cd9a24a185] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1115 09:07:33.158168 248107 system_pods.go:61] "nvidia-device-plugin-daemonset-tz8vm" [7fa140f3-685f-4d2a-8467-05ffa2701601] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1115 09:07:33.158177 248107 system_pods.go:61] "registry-6b586f9694-hgvh6" [76662db3-ff4c-4ca1-8587-5d8f12c77a66] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1115 09:07:33.158186 248107 system_pods.go:61] "registry-creds-764b6fb674-ckjls" [05e09078-15a0-4a10-bbcf-6ef46b064286] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1115 09:07:33.158195 248107 system_pods.go:61] "registry-proxy-9tkz8" [527b58a0-a1f0-4419-ac42-b4de22cf8ccb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1115 09:07:33.158202 248107 system_pods.go:61] "snapshot-controller-7d9fbc56b8-6cbw4" [33beb311-2ed8-4dbd-a0e0-297d6eccc21a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1115 09:07:33.158212 248107 system_pods.go:61] "snapshot-controller-7d9fbc56b8-f6qhx" [44385801-5893-4318-b18c-25f5dbedef16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1115 09:07:33.158222 248107 system_pods.go:61] "storage-provisioner" [5890e29d-b25e-40cb-ae66-27c0be7f0c73] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1115 09:07:33.158231 248107 system_pods.go:74] duration metric: took 19.024157ms to wait for pod list to return data ...
I1115 09:07:33.158244 248107 default_sa.go:34] waiting for default service account to be created ...
I1115 09:07:33.185286 248107 default_sa.go:45] found service account: "default"
I1115 09:07:33.185315 248107 default_sa.go:55] duration metric: took 27.063806ms for default service account to be created ...
I1115 09:07:33.185327 248107 system_pods.go:116] waiting for k8s-apps to be running ...
I1115 09:07:33.202151 248107 system_pods.go:86] 20 kube-system pods found
I1115 09:07:33.202191 248107 system_pods.go:89] "amd-gpu-device-plugin-wqpn5" [d0adea6d-3b3e-41d2-8340-2d42b53060e4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1115 09:07:33.202200 248107 system_pods.go:89] "coredns-66bc5c9577-8jkds" [dd0f8515-daad-4d10-9aba-fcd0e8b6e400] Running
I1115 09:07:33.202208 248107 system_pods.go:89] "coredns-66bc5c9577-cm284" [23ab3d77-85ec-40f3-afff-0a20ae3716f2] Running
I1115 09:07:33.202250 248107 system_pods.go:89] "csi-hostpath-attacher-0" [f3ac8c44-e97a-415a-9da5-2861ac50ed3c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1115 09:07:33.202261 248107 system_pods.go:89] "csi-hostpath-resizer-0" [7d461536-20d4-4e76-ad1a-a96a3fad5a61] Pending
I1115 09:07:33.202272 248107 system_pods.go:89] "csi-hostpathplugin-zsbwn" [6717d9e7-923f-476e-97d5-2384885e4838] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1115 09:07:33.202281 248107 system_pods.go:89] "etcd-addons-663794" [6900238e-53a4-4ca1-a620-18fcc9a25270] Running
I1115 09:07:33.202289 248107 system_pods.go:89] "kube-apiserver-addons-663794" [3091663f-e6e7-4f57-88c7-6992940c38c9] Running
I1115 09:07:33.202295 248107 system_pods.go:89] "kube-controller-manager-addons-663794" [6dbf7d10-e0f0-4d7e-a183-1055122ae05d] Running
I1115 09:07:33.202309 248107 system_pods.go:89] "kube-ingress-dns-minikube" [25109e09-af9d-420d-b989-529552614336] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1115 09:07:33.202315 248107 system_pods.go:89] "kube-proxy-kjfgf" [3eeef006-089f-401e-956f-df7c8c9d9a44] Running
I1115 09:07:33.202322 248107 system_pods.go:89] "kube-scheduler-addons-663794" [0d5017fe-6032-4daf-a785-cf42e429886f] Running
I1115 09:07:33.202333 248107 system_pods.go:89] "metrics-server-85b7d694d7-z4cnh" [a5aaf6d1-1d0d-439f-a5f1-50cd9a24a185] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1115 09:07:33.202345 248107 system_pods.go:89] "nvidia-device-plugin-daemonset-tz8vm" [7fa140f3-685f-4d2a-8467-05ffa2701601] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1115 09:07:33.202358 248107 system_pods.go:89] "registry-6b586f9694-hgvh6" [76662db3-ff4c-4ca1-8587-5d8f12c77a66] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1115 09:07:33.202367 248107 system_pods.go:89] "registry-creds-764b6fb674-ckjls" [05e09078-15a0-4a10-bbcf-6ef46b064286] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1115 09:07:33.202378 248107 system_pods.go:89] "registry-proxy-9tkz8" [527b58a0-a1f0-4419-ac42-b4de22cf8ccb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1115 09:07:33.202387 248107 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6cbw4" [33beb311-2ed8-4dbd-a0e0-297d6eccc21a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1115 09:07:33.202396 248107 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f6qhx" [44385801-5893-4318-b18c-25f5dbedef16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1115 09:07:33.202405 248107 system_pods.go:89] "storage-provisioner" [5890e29d-b25e-40cb-ae66-27c0be7f0c73] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1115 09:07:33.202421 248107 system_pods.go:126] duration metric: took 17.082448ms to wait for k8s-apps to be running ...
I1115 09:07:33.202437 248107 system_svc.go:44] waiting for kubelet service to be running ....
I1115 09:07:33.202507 248107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1115 09:07:33.220667 248107 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1115 09:07:33.220693 248107 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1115 09:07:33.226296 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:33.295766 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:33.309919 248107 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1115 09:07:33.309944 248107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1115 09:07:33.359472 248107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1115 09:07:33.608180 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:33.713189 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:33.717149 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:34.100515 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:34.254687 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:34.256283 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:34.602187 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:34.720040 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:34.721271 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:35.098173 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:35.213267 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:35.214996 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:35.597839 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:35.611666 248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.449005846s)
I1115 09:07:35.611712 248107 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.409176256s)
I1115 09:07:35.611738 248107 system_svc.go:56] duration metric: took 2.40929762s WaitForService to wait for kubelet
I1115 09:07:35.611749 248107 kubeadm.go:587] duration metric: took 12.762317792s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1115 09:07:35.611777 248107 node_conditions.go:102] verifying NodePressure condition ...
I1115 09:07:35.611801 248107 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.252297884s)
I1115 09:07:35.613086 248107 addons.go:480] Verifying addon gcp-auth=true in "addons-663794"
I1115 09:07:35.614808 248107 out.go:179] * Verifying gcp-auth addon...
I1115 09:07:35.616083 248107 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1115 09:07:35.616106 248107 node_conditions.go:123] node cpu capacity is 2
I1115 09:07:35.616121 248107 node_conditions.go:105] duration metric: took 4.337898ms to run NodePressure ...
I1115 09:07:35.616135 248107 start.go:242] waiting for startup goroutines ...
I1115 09:07:35.616857 248107 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1115 09:07:35.622909 248107 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1115 09:07:35.622935 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:35.716491 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:35.717682 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:36.096116 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:36.120094 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:36.211403 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:36.211933 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:36.595656 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:36.621021 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:36.711258 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:36.712572 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:37.097058 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:37.122934 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:37.221840 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:37.222106 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:37.597514 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:37.622993 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:37.724467 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:37.728757 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:38.098762 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:38.124957 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:38.213534 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:38.217091 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:38.596983 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:38.621993 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:38.712212 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:38.713704 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:39.096546 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:39.122295 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:39.215826 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:39.216132 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:39.594865 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:39.620922 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:39.712653 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:39.713631 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:40.097017 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:40.122394 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:40.210854 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:40.210899 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:40.596160 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:40.621226 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:40.711567 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:40.712578 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:41.096119 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:41.120992 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:41.212142 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:41.212163 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:41.595487 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:41.620728 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:41.711965 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:41.712158 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:42.095291 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:42.120584 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:42.211176 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:42.211282 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:42.595938 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:42.621049 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:42.711234 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:42.712875 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:43.094391 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:43.120358 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:43.210538 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:43.213000 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:43.595985 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:43.621877 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:43.715594 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:43.715678 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:44.098589 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:44.121896 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:44.211492 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:44.213357 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:44.595434 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:44.621806 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:44.711245 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:44.714288 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:45.097500 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:45.120874 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:45.213062 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:45.214423 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:45.683834 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:45.686540 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:45.787011 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:45.787674 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:46.097100 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:46.121567 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:46.210973 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:46.211377 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:46.597237 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:46.620934 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:46.713168 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:46.713396 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:47.096369 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:47.121346 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:47.211501 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:47.211537 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:47.595670 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:47.620987 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:47.713311 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:47.713719 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:48.095582 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:48.120705 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:48.212080 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:48.212195 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:48.594985 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:48.619634 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:48.713722 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:48.713890 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:49.095817 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:49.123336 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:49.211252 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:49.212341 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:49.597899 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:49.621064 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:49.712658 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:49.716676 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:50.096808 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:50.122166 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:50.213342 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:50.213343 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:50.597233 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:50.619969 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:50.715401 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:50.718000 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:51.096522 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:51.120541 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:51.216763 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:51.217353 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:51.595744 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:51.620869 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:51.713848 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:51.714820 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:52.254179 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:52.254362 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:52.254363 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:52.254465 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:52.597780 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:52.621647 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:52.711011 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:52.711936 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:53.095746 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:53.120615 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:53.214336 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:53.215844 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:53.595983 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:53.621577 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:53.710595 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1115 09:07:53.711146 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:07:54.096519 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:07:54.120902 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:07:54.213059 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
[... identical "waiting for pod ... Pending" polling lines for these four selectors, repeated roughly every 500ms from 09:07:54 through 09:08:07, elided ...]
I1115 09:08:07.711095 248107 kapi.go:107] duration metric: took 36.003677244s to wait for kubernetes.io/minikube-addons=registry ...
I1115 09:08:07.711741 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:08.095222 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:08:08.120582 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
[... identical "waiting for pod ... Pending" polling lines for these three remaining selectors, repeated roughly every 500ms from 09:08:08 through 09:08:33, elided ...]
I1115 09:08:34.096703 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:08:34.121904 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:34.214120 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:34.594881 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:08:34.621650 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:34.711047 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:35.095087 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:08:35.120601 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:35.211164 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:35.597908 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:08:35.621382 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:35.710949 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:36.097236 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:08:36.122108 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:36.213324 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:36.596307 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:08:36.621065 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:36.711348 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:37.097111 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:08:37.121519 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:37.215055 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:37.600495 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1115 09:08:37.623157 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:37.711651 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:38.104501 248107 kapi.go:107] duration metric: took 1m5.01328995s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1115 09:08:38.127575 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:38.214834 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:38.624843 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:38.712568 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:39.125145 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:39.215821 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:39.622958 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:39.711746 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:40.120800 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:40.211225 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:40.620499 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:40.711199 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:41.121108 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:41.211302 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:41.625433 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:41.712212 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:42.120862 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:42.213108 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:42.620681 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:42.711799 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:43.120992 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:43.221876 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:43.621934 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:43.723145 248107 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1115 09:08:44.123036 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:44.212187 248107 kapi.go:107] duration metric: took 1m12.505049599s to wait for app.kubernetes.io/name=ingress-nginx ...
I1115 09:08:44.621283 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:45.120804 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:45.621260 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:46.121757 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:46.621844 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:47.122314 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:47.620380 248107 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1115 09:08:48.121786 248107 kapi.go:107] duration metric: took 1m12.504923687s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1115 09:08:48.123321 248107 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-663794 cluster.
I1115 09:08:48.124701 248107 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1115 09:08:48.125904 248107 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I1115 09:08:48.127125 248107 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, registry-creds, storage-provisioner, inspektor-gadget, ingress-dns, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I1115 09:08:48.128299 248107 addons.go:515] duration metric: took 1m25.27885159s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin registry-creds storage-provisioner inspektor-gadget ingress-dns cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I1115 09:08:48.128348 248107 start.go:247] waiting for cluster config update ...
I1115 09:08:48.128380 248107 start.go:256] writing updated cluster config ...
I1115 09:08:48.128717 248107 ssh_runner.go:195] Run: rm -f paused
I1115 09:08:48.145880 248107 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1115 09:08:48.221356 248107 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cm284" in "kube-system" namespace to be "Ready" or be gone ...
I1115 09:08:48.226968 248107 pod_ready.go:94] pod "coredns-66bc5c9577-cm284" is "Ready"
I1115 09:08:48.226993 248107 pod_ready.go:86] duration metric: took 5.606037ms for pod "coredns-66bc5c9577-cm284" in "kube-system" namespace to be "Ready" or be gone ...
I1115 09:08:48.229298 248107 pod_ready.go:83] waiting for pod "etcd-addons-663794" in "kube-system" namespace to be "Ready" or be gone ...
I1115 09:08:48.233430 248107 pod_ready.go:94] pod "etcd-addons-663794" is "Ready"
I1115 09:08:48.233470 248107 pod_ready.go:86] duration metric: took 4.150626ms for pod "etcd-addons-663794" in "kube-system" namespace to be "Ready" or be gone ...
I1115 09:08:48.235487 248107 pod_ready.go:83] waiting for pod "kube-apiserver-addons-663794" in "kube-system" namespace to be "Ready" or be gone ...
I1115 09:08:48.239516 248107 pod_ready.go:94] pod "kube-apiserver-addons-663794" is "Ready"
I1115 09:08:48.239540 248107 pod_ready.go:86] duration metric: took 4.028368ms for pod "kube-apiserver-addons-663794" in "kube-system" namespace to be "Ready" or be gone ...
I1115 09:08:48.242027 248107 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-663794" in "kube-system" namespace to be "Ready" or be gone ...
I1115 09:08:48.551159 248107 pod_ready.go:94] pod "kube-controller-manager-addons-663794" is "Ready"
I1115 09:08:48.551185 248107 pod_ready.go:86] duration metric: took 309.140415ms for pod "kube-controller-manager-addons-663794" in "kube-system" namespace to be "Ready" or be gone ...
I1115 09:08:48.751092 248107 pod_ready.go:83] waiting for pod "kube-proxy-kjfgf" in "kube-system" namespace to be "Ready" or be gone ...
I1115 09:08:49.150351 248107 pod_ready.go:94] pod "kube-proxy-kjfgf" is "Ready"
I1115 09:08:49.150376 248107 pod_ready.go:86] duration metric: took 399.248241ms for pod "kube-proxy-kjfgf" in "kube-system" namespace to be "Ready" or be gone ...
I1115 09:08:49.352696 248107 pod_ready.go:83] waiting for pod "kube-scheduler-addons-663794" in "kube-system" namespace to be "Ready" or be gone ...
I1115 09:08:49.750874 248107 pod_ready.go:94] pod "kube-scheduler-addons-663794" is "Ready"
I1115 09:08:49.750919 248107 pod_ready.go:86] duration metric: took 398.195381ms for pod "kube-scheduler-addons-663794" in "kube-system" namespace to be "Ready" or be gone ...
I1115 09:08:49.750934 248107 pod_ready.go:40] duration metric: took 1.605004891s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1115 09:08:49.792841 248107 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
I1115 09:08:49.794671 248107 out.go:179] * Done! kubectl is now configured to use "addons-663794" cluster and "default" namespace by default
==> CRI-O <==
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.603985746Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763197915603828542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e9371810-ae94-41d2-9306-9850da4a1ed8 name=/runtime.v1.ImageService/ImageFsInfo
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.605210091Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7070b9bc-65f3-4219-a68a-2ec23c59c7d1 name=/runtime.v1.RuntimeService/ListContainers
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.605276588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7070b9bc-65f3-4219-a68a-2ec23c59c7d1 name=/runtime.v1.RuntimeService/ListContainers
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.606414080Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b8179eace54700cc3708fd8afd689c63ec6930105ac5a7c4bd9f1774974f81a,PodSandboxId:a1545119edac5d7d2e641b0db82abf21a40b2c2a3e1064da6493f6a3cf20714a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763197775405580219,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2b013a75-814d-4176-8b62-830d8b345b7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45fac1d9822d06822273a1d8fa3b17d5cd4246dbca85a33c0e2b1cff8ffdff53,PodSandboxId:6f4a005ba20f208844ebf692a2a34553898773a1a5d08aa55965951b2578dd04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763197734317265003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 011de042-57a4-4b3e-bb73-a8fb6b5af30b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fae8541a1e2c22112603818756d131c89fb54bb1cac4ed10f6094d64ed2078,PodSandboxId:5f299fe3db0e12f69a87a529f5393cce48aa0bfd3013c1cd70024c4c1146a155,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763197723485155012,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-pnxxs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 53d12e45-c2d8-4e25-97ef-a61c276e30fc,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2dc686e8d69ccf1dc652dc7a485939d7239139ae17c31427e39e3211578a11cc,PodSandboxId:8320cba6a5b9bdb0d371493571e072bd5c116e53e488630e9ce9c44679646102,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763197708939344321,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z6xbv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: eb9b9b53-404c-4b9a-83bd-cac24a935cc0,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9ef2926ff0429ae9a6a1f11486aa2d2d32224cc046edd45719b09fc45146d05,PodSandboxId:c0c91f0d190ca7a048386c77c31e3c27aada227e4a7fc511c0496e7142dfa6da,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763197696676125425,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-msxbx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50043b77-ff68-42e1-bca4-45c0f89727aa,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f04c322f6ed151b3971655120f17e77c75e3572e7d46e3391b5f3351241c7607,PodSandboxId:d2cd97bf20becb6d96674d2aa432b6f71a4fe46f4b716f51eeee4c10ee45d23c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763197688864305367,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-t6qdh,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 23460e79-de72-494a-8f7c-7a627e197764,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31788a34c6a9c5e267d52b084bf138c5e3503d0f2ef0b36cce944df9b006bd27,PodSandboxId:9480436cb50ca34c8eee940a27ca26b1e94537b193eeb5595cf82b65b331169e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76
812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763197676211378771,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25109e09-af9d-420d-b989-529552614336,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175c0420aa1f982117d3cf16eca5557e5d042c05aca8856bfc07827f92f19f1c,PodSandboxId:e26fb9ea124685a62d0322338493a06e3312de468d48622ef9624bf42c187c12,Metadata:
&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763197653294638445,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wqpn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0adea6d-3b3e-41d2-8340-2d42b53060e4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:546ccdaa0af307f25e29ceb63902aad47edca6e05cef3c5d50038afb813ca7e7,PodSandboxId:e5df1977dc9313dd2608547c3585ba607c
915c3701f6d9d4fee9ca30504f6770,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763197652815511621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5890e29d-b25e-40cb-ae66-27c0be7f0c73,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b130e9bb0eaa0d5d4a79584522ecd5194d0b509e72a88920c03e1a4bd3da57,PodSandboxId:c53566f7938894374045116450f5596e17d7d022a997bc
48679867fe0d94b498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763197643984322631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-cm284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23ab3d77-85ec-40f3-afff-0a20ae3716f2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2949bab9cc7b4145ecadbbb4001bd89d6b54bc740e9e60afcca89f217f5ff0,PodSandboxId:63f9a0a57163f76d0008f12025b3763402d0267281274f24f0b6302b395c576f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763197643074895057,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjfgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eeef006-089f-401e-956f-df7c8c9d9a44,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2896ea62bda02545173bdabb5a4163e0ed09aa1e854b3f707d4254b13299a39,PodSandboxId:45c3067eb99e6cb26149ecad26e18cacf94df4ec42a82f198bfe7ce18da80167,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763197631742078737,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 055e84e2428ddf42f20dbd528dd611a3,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b051d48655923d833e585332c0dda91833637a2c6a209c97cca463ea3058ac,PodSandboxId:5e5f8f18eb1905ab2202a68599f5c1aa1884c0c13f66f646fa733d775d221aca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763197631683823512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cf9acbe4bd299e3f9ca6fed8a38
31b,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1157ae27d3b1584080fc56c75eafad29ef6d119e5ce9175725fc78e1eabc92e,PodSandboxId:64aa18e4443823c06c4aeda762900c6b5a10849f2c5f1fb15c9d14b1861459c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763197631659541771,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-663794,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 77d36b5990749e5bfb68424df61b6733,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ad4b54254b4e6982c6ad4ec16f9aff3f18ba9bf06439e485e145355b489a9e,PodSandboxId:3e81ea99733dfa782d03d361d45bbca869692ebbbd2ede7ae2689a07c403caaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763197631670584533,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fad1d6ae4012d5374cd73b293ff20dcd,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7070b9bc-65f3-4219-a68a-2ec23c59c7d1 name=/runtime.v1.RuntimeService/ListContainers
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.649582307Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9d0d454-da50-488f-8174-c61e98f3fbc5 name=/runtime.v1.RuntimeService/Version
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.649705946Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9d0d454-da50-488f-8174-c61e98f3fbc5 name=/runtime.v1.RuntimeService/Version
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.651237009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b38abff0-d3be-49a0-af17-edff86077ea9 name=/runtime.v1.ImageService/ImageFsInfo
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.652764201Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763197915652725637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b38abff0-d3be-49a0-af17-edff86077ea9 name=/runtime.v1.ImageService/ImageFsInfo
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.653714377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e5b1cd2-c469-498f-ab47-c241eb35f9eb name=/runtime.v1.RuntimeService/ListContainers
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.653777579Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e5b1cd2-c469-498f-ab47-c241eb35f9eb name=/runtime.v1.RuntimeService/ListContainers
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.654198026Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b8179eace54700cc3708fd8afd689c63ec6930105ac5a7c4bd9f1774974f81a,PodSandboxId:a1545119edac5d7d2e641b0db82abf21a40b2c2a3e1064da6493f6a3cf20714a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763197775405580219,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2b013a75-814d-4176-8b62-830d8b345b7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45fac1d9822d06822273a1d8fa3b17d5cd4246dbca85a33c0e2b1cff8ffdff53,PodSandboxId:6f4a005ba20f208844ebf692a2a34553898773a1a5d08aa55965951b2578dd04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763197734317265003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 011de042-57a4-4b3e-bb73-a8fb6b5af30b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fae8541a1e2c22112603818756d131c89fb54bb1cac4ed10f6094d64ed2078,PodSandboxId:5f299fe3db0e12f69a87a529f5393cce48aa0bfd3013c1cd70024c4c1146a155,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763197723485155012,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-pnxxs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 53d12e45-c2d8-4e25-97ef-a61c276e30fc,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2dc686e8d69ccf1dc652dc7a485939d7239139ae17c31427e39e3211578a11cc,PodSandboxId:8320cba6a5b9bdb0d371493571e072bd5c116e53e488630e9ce9c44679646102,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763197708939344321,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z6xbv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: eb9b9b53-404c-4b9a-83bd-cac24a935cc0,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9ef2926ff0429ae9a6a1f11486aa2d2d32224cc046edd45719b09fc45146d05,PodSandboxId:c0c91f0d190ca7a048386c77c31e3c27aada227e4a7fc511c0496e7142dfa6da,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763197696676125425,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-msxbx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50043b77-ff68-42e1-bca4-45c0f89727aa,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f04c322f6ed151b3971655120f17e77c75e3572e7d46e3391b5f3351241c7607,PodSandboxId:d2cd97bf20becb6d96674d2aa432b6f71a4fe46f4b716f51eeee4c10ee45d23c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763197688864305367,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-t6qdh,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 23460e79-de72-494a-8f7c-7a627e197764,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31788a34c6a9c5e267d52b084bf138c5e3503d0f2ef0b36cce944df9b006bd27,PodSandboxId:9480436cb50ca34c8eee940a27ca26b1e94537b193eeb5595cf82b65b331169e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76
812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763197676211378771,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25109e09-af9d-420d-b989-529552614336,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175c0420aa1f982117d3cf16eca5557e5d042c05aca8856bfc07827f92f19f1c,PodSandboxId:e26fb9ea124685a62d0322338493a06e3312de468d48622ef9624bf42c187c12,Metadata:
&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763197653294638445,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wqpn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0adea6d-3b3e-41d2-8340-2d42b53060e4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:546ccdaa0af307f25e29ceb63902aad47edca6e05cef3c5d50038afb813ca7e7,PodSandboxId:e5df1977dc9313dd2608547c3585ba607c
915c3701f6d9d4fee9ca30504f6770,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763197652815511621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5890e29d-b25e-40cb-ae66-27c0be7f0c73,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b130e9bb0eaa0d5d4a79584522ecd5194d0b509e72a88920c03e1a4bd3da57,PodSandboxId:c53566f7938894374045116450f5596e17d7d022a997bc
48679867fe0d94b498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763197643984322631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-cm284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23ab3d77-85ec-40f3-afff-0a20ae3716f2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2949bab9cc7b4145ecadbbb4001bd89d6b54bc740e9e60afcca89f217f5ff0,PodSandboxId:63f9a0a57163f76d0008f12025b3763402d0267281274f24f0b6302b395c576f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763197643074895057,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjfgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eeef006-089f-401e-956f-df7c8c9d9a44,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2896ea62bda02545173bdabb5a4163e0ed09aa1e854b3f707d4254b13299a39,PodSandboxId:45c3067eb99e6cb26149ecad26e18cacf94df4ec42a82f198bfe7ce18da80167,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763197631742078737,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 055e84e2428ddf42f20dbd528dd611a3,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b051d48655923d833e585332c0dda91833637a2c6a209c97cca463ea3058ac,PodSandboxId:5e5f8f18eb1905ab2202a68599f5c1aa1884c0c13f66f646fa733d775d221aca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763197631683823512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cf9acbe4bd299e3f9ca6fed8a38
31b,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1157ae27d3b1584080fc56c75eafad29ef6d119e5ce9175725fc78e1eabc92e,PodSandboxId:64aa18e4443823c06c4aeda762900c6b5a10849f2c5f1fb15c9d14b1861459c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763197631659541771,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-663794,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 77d36b5990749e5bfb68424df61b6733,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ad4b54254b4e6982c6ad4ec16f9aff3f18ba9bf06439e485e145355b489a9e,PodSandboxId:3e81ea99733dfa782d03d361d45bbca869692ebbbd2ede7ae2689a07c403caaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763197631670584533,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fad1d6ae4012d5374cd73b293ff20dcd,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e5b1cd2-c469-498f-ab47-c241eb35f9eb name=/runtime.v1.RuntimeService/ListContainers
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.690409063Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24f50b08-e237-4140-87c5-8c367cd24082 name=/runtime.v1.RuntimeService/Version
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.690498441Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24f50b08-e237-4140-87c5-8c367cd24082 name=/runtime.v1.RuntimeService/Version
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.692136043Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a02caaa-635a-444e-9c9c-a84adb45897b name=/runtime.v1.ImageService/ImageFsInfo
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.693422947Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763197915693393878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a02caaa-635a-444e-9c9c-a84adb45897b name=/runtime.v1.ImageService/ImageFsInfo
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.694285945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62f9f9a8-803c-49b7-95a0-5ada02639710 name=/runtime.v1.RuntimeService/ListContainers
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.694420061Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62f9f9a8-803c-49b7-95a0-5ada02639710 name=/runtime.v1.RuntimeService/ListContainers
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.694993471Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b8179eace54700cc3708fd8afd689c63ec6930105ac5a7c4bd9f1774974f81a,PodSandboxId:a1545119edac5d7d2e641b0db82abf21a40b2c2a3e1064da6493f6a3cf20714a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763197775405580219,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2b013a75-814d-4176-8b62-830d8b345b7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45fac1d9822d06822273a1d8fa3b17d5cd4246dbca85a33c0e2b1cff8ffdff53,PodSandboxId:6f4a005ba20f208844ebf692a2a34553898773a1a5d08aa55965951b2578dd04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763197734317265003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 011de042-57a4-4b3e-bb73-a8fb6b5af30b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fae8541a1e2c22112603818756d131c89fb54bb1cac4ed10f6094d64ed2078,PodSandboxId:5f299fe3db0e12f69a87a529f5393cce48aa0bfd3013c1cd70024c4c1146a155,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763197723485155012,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-pnxxs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 53d12e45-c2d8-4e25-97ef-a61c276e30fc,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2dc686e8d69ccf1dc652dc7a485939d7239139ae17c31427e39e3211578a11cc,PodSandboxId:8320cba6a5b9bdb0d371493571e072bd5c116e53e488630e9ce9c44679646102,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763197708939344321,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z6xbv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: eb9b9b53-404c-4b9a-83bd-cac24a935cc0,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9ef2926ff0429ae9a6a1f11486aa2d2d32224cc046edd45719b09fc45146d05,PodSandboxId:c0c91f0d190ca7a048386c77c31e3c27aada227e4a7fc511c0496e7142dfa6da,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763197696676125425,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-msxbx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50043b77-ff68-42e1-bca4-45c0f89727aa,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f04c322f6ed151b3971655120f17e77c75e3572e7d46e3391b5f3351241c7607,PodSandboxId:d2cd97bf20becb6d96674d2aa432b6f71a4fe46f4b716f51eeee4c10ee45d23c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763197688864305367,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-t6qdh,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 23460e79-de72-494a-8f7c-7a627e197764,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31788a34c6a9c5e267d52b084bf138c5e3503d0f2ef0b36cce944df9b006bd27,PodSandboxId:9480436cb50ca34c8eee940a27ca26b1e94537b193eeb5595cf82b65b331169e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76
812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763197676211378771,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25109e09-af9d-420d-b989-529552614336,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175c0420aa1f982117d3cf16eca5557e5d042c05aca8856bfc07827f92f19f1c,PodSandboxId:e26fb9ea124685a62d0322338493a06e3312de468d48622ef9624bf42c187c12,Metadata:
&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763197653294638445,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wqpn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0adea6d-3b3e-41d2-8340-2d42b53060e4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:546ccdaa0af307f25e29ceb63902aad47edca6e05cef3c5d50038afb813ca7e7,PodSandboxId:e5df1977dc9313dd2608547c3585ba607c
915c3701f6d9d4fee9ca30504f6770,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763197652815511621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5890e29d-b25e-40cb-ae66-27c0be7f0c73,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b130e9bb0eaa0d5d4a79584522ecd5194d0b509e72a88920c03e1a4bd3da57,PodSandboxId:c53566f7938894374045116450f5596e17d7d022a997bc
48679867fe0d94b498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763197643984322631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-cm284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23ab3d77-85ec-40f3-afff-0a20ae3716f2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2949bab9cc7b4145ecadbbb4001bd89d6b54bc740e9e60afcca89f217f5ff0,PodSandboxId:63f9a0a57163f76d0008f12025b3763402d0267281274f24f0b6302b395c576f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763197643074895057,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjfgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eeef006-089f-401e-956f-df7c8c9d9a44,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2896ea62bda02545173bdabb5a4163e0ed09aa1e854b3f707d4254b13299a39,PodSandboxId:45c3067eb99e6cb26149ecad26e18cacf94df4ec42a82f198bfe7ce18da80167,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763197631742078737,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 055e84e2428ddf42f20dbd528dd611a3,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b051d48655923d833e585332c0dda91833637a2c6a209c97cca463ea3058ac,PodSandboxId:5e5f8f18eb1905ab2202a68599f5c1aa1884c0c13f66f646fa733d775d221aca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763197631683823512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cf9acbe4bd299e3f9ca6fed8a38
31b,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1157ae27d3b1584080fc56c75eafad29ef6d119e5ce9175725fc78e1eabc92e,PodSandboxId:64aa18e4443823c06c4aeda762900c6b5a10849f2c5f1fb15c9d14b1861459c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763197631659541771,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-663794,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 77d36b5990749e5bfb68424df61b6733,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ad4b54254b4e6982c6ad4ec16f9aff3f18ba9bf06439e485e145355b489a9e,PodSandboxId:3e81ea99733dfa782d03d361d45bbca869692ebbbd2ede7ae2689a07c403caaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763197631670584533,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fad1d6ae4012d5374cd73b293ff20dcd,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62f9f9a8-803c-49b7-95a0-5ada02639710 name=/runtime.v1.RuntimeService/ListContainers
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.730301025Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d10e4b77-9345-468a-973a-6fce9e4e488f name=/runtime.v1.RuntimeService/Version
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.730464175Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d10e4b77-9345-468a-973a-6fce9e4e488f name=/runtime.v1.RuntimeService/Version
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.731729561Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afad5861-cdd3-4342-926b-0590e4bb3153 name=/runtime.v1.ImageService/ImageFsInfo
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.733107281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763197915733074290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588596,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afad5861-cdd3-4342-926b-0590e4bb3153 name=/runtime.v1.ImageService/ImageFsInfo
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.733765055Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26ad6cdc-04d5-4ca3-a6ac-30102f4b2835 name=/runtime.v1.RuntimeService/ListContainers
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.733832817Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26ad6cdc-04d5-4ca3-a6ac-30102f4b2835 name=/runtime.v1.RuntimeService/ListContainers
Nov 15 09:11:55 addons-663794 crio[812]: time="2025-11-15 09:11:55.734296368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b8179eace54700cc3708fd8afd689c63ec6930105ac5a7c4bd9f1774974f81a,PodSandboxId:a1545119edac5d7d2e641b0db82abf21a40b2c2a3e1064da6493f6a3cf20714a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763197775405580219,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2b013a75-814d-4176-8b62-830d8b345b7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45fac1d9822d06822273a1d8fa3b17d5cd4246dbca85a33c0e2b1cff8ffdff53,PodSandboxId:6f4a005ba20f208844ebf692a2a34553898773a1a5d08aa55965951b2578dd04,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763197734317265003,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 011de042-57a4-4b3e-bb73-a8fb6b5af30b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fae8541a1e2c22112603818756d131c89fb54bb1cac4ed10f6094d64ed2078,PodSandboxId:5f299fe3db0e12f69a87a529f5393cce48aa0bfd3013c1cd70024c4c1146a155,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763197723485155012,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-pnxxs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 53d12e45-c2d8-4e25-97ef-a61c276e30fc,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2dc686e8d69ccf1dc652dc7a485939d7239139ae17c31427e39e3211578a11cc,PodSandboxId:8320cba6a5b9bdb0d371493571e072bd5c116e53e488630e9ce9c44679646102,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763197708939344321,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z6xbv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: eb9b9b53-404c-4b9a-83bd-cac24a935cc0,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9ef2926ff0429ae9a6a1f11486aa2d2d32224cc046edd45719b09fc45146d05,PodSandboxId:c0c91f0d190ca7a048386c77c31e3c27aada227e4a7fc511c0496e7142dfa6da,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763197696676125425,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-msxbx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50043b77-ff68-42e1-bca4-45c0f89727aa,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f04c322f6ed151b3971655120f17e77c75e3572e7d46e3391b5f3351241c7607,PodSandboxId:d2cd97bf20becb6d96674d2aa432b6f71a4fe46f4b716f51eeee4c10ee45d23c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763197688864305367,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-t6qdh,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 23460e79-de72-494a-8f7c-7a627e197764,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31788a34c6a9c5e267d52b084bf138c5e3503d0f2ef0b36cce944df9b006bd27,PodSandboxId:9480436cb50ca34c8eee940a27ca26b1e94537b193eeb5595cf82b65b331169e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76
812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763197676211378771,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25109e09-af9d-420d-b989-529552614336,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175c0420aa1f982117d3cf16eca5557e5d042c05aca8856bfc07827f92f19f1c,PodSandboxId:e26fb9ea124685a62d0322338493a06e3312de468d48622ef9624bf42c187c12,Metadata:
&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763197653294638445,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wqpn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0adea6d-3b3e-41d2-8340-2d42b53060e4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:546ccdaa0af307f25e29ceb63902aad47edca6e05cef3c5d50038afb813ca7e7,PodSandboxId:e5df1977dc9313dd2608547c3585ba607c
915c3701f6d9d4fee9ca30504f6770,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763197652815511621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5890e29d-b25e-40cb-ae66-27c0be7f0c73,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b130e9bb0eaa0d5d4a79584522ecd5194d0b509e72a88920c03e1a4bd3da57,PodSandboxId:c53566f7938894374045116450f5596e17d7d022a997bc
48679867fe0d94b498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763197643984322631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-cm284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23ab3d77-85ec-40f3-afff-0a20ae3716f2,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2949bab9cc7b4145ecadbbb4001bd89d6b54bc740e9e60afcca89f217f5ff0,PodSandboxId:63f9a0a57163f76d0008f12025b3763402d0267281274f24f0b6302b395c576f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763197643074895057,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kjfgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eeef006-089f-401e-956f-df7c8c9d9a44,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2896ea62bda02545173bdabb5a4163e0ed09aa1e854b3f707d4254b13299a39,PodSandboxId:45c3067eb99e6cb26149ecad26e18cacf94df4ec42a82f198bfe7ce18da80167,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763197631742078737,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 055e84e2428ddf42f20dbd528dd611a3,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95b051d48655923d833e585332c0dda91833637a2c6a209c97cca463ea3058ac,PodSandboxId:5e5f8f18eb1905ab2202a68599f5c1aa1884c0c13f66f646fa733d775d221aca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763197631683823512,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cf9acbe4bd299e3f9ca6fed8a38
31b,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1157ae27d3b1584080fc56c75eafad29ef6d119e5ce9175725fc78e1eabc92e,PodSandboxId:64aa18e4443823c06c4aeda762900c6b5a10849f2c5f1fb15c9d14b1861459c9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763197631659541771,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-663794,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 77d36b5990749e5bfb68424df61b6733,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41ad4b54254b4e6982c6ad4ec16f9aff3f18ba9bf06439e485e145355b489a9e,PodSandboxId:3e81ea99733dfa782d03d361d45bbca869692ebbbd2ede7ae2689a07c403caaa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763197631670584533,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-663794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fad1d6ae4012d5374cd73b293ff20dcd,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26ad6cdc-04d5-4ca3-a6ac-30102f4b2835 name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
0b8179eace547 docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 2 minutes ago Running nginx 0 a1545119edac5 nginx
45fac1d9822d0 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 3 minutes ago Running busybox 0 6f4a005ba20f2 busybox
a3fae8541a1e2 registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27 3 minutes ago Running controller 0 5f299fe3db0e1 ingress-nginx-controller-6c8bf45fb-pnxxs
2dc686e8d69cc registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f 3 minutes ago Exited patch 0 8320cba6a5b9b ingress-nginx-admission-patch-z6xbv
e9ef2926ff042 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f 3 minutes ago Exited create 0 c0c91f0d190ca ingress-nginx-admission-create-msxbx
f04c322f6ed15 docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef 3 minutes ago Running local-path-provisioner 0 d2cd97bf20bec local-path-provisioner-648f6765c9-t6qdh
31788a34c6a9c docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 3 minutes ago Running minikube-ingress-dns 0 9480436cb50ca kube-ingress-dns-minikube
175c0420aa1f9 docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 e26fb9ea12468 amd-gpu-device-plugin-wqpn5
546ccdaa0af30 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 e5df1977dc931 storage-provisioner
27b130e9bb0ea 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 c53566f793889 coredns-66bc5c9577-cm284
2d2949bab9cc7 fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7 4 minutes ago Running kube-proxy 0 63f9a0a57163f kube-proxy-kjfgf
a2896ea62bda0 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813 4 minutes ago Running kube-scheduler 0 45c3067eb99e6 kube-scheduler-addons-663794
95b051d486559 c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f 4 minutes ago Running kube-controller-manager 0 5e5f8f18eb190 kube-controller-manager-addons-663794
41ad4b54254b4 c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97 4 minutes ago Running kube-apiserver 0 3e81ea99733df kube-apiserver-addons-663794
e1157ae27d3b1 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115 4 minutes ago Running etcd 0 64aa18e444382 etcd-addons-663794
==> coredns [27b130e9bb0eaa0d5d4a79584522ecd5194d0b509e72a88920c03e1a4bd3da57] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
[INFO] Reloading complete
[INFO] 10.244.0.26:44250 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001056286s
[INFO] 10.244.0.26:46022 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00018473s
==> describe nodes <==
Name: addons-663794
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-663794
kubernetes.io/os=linux
minikube.k8s.io/commit=4506e54a47268cf8484a056886191ddf0e705dd0
minikube.k8s.io/name=addons-663794
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_11_15T09_07_18_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-663794
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 15 Nov 2025 09:07:14 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-663794
AcquireTime: <unset>
RenewTime: Sat, 15 Nov 2025 09:11:52 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 15 Nov 2025 09:09:51 +0000 Sat, 15 Nov 2025 09:07:12 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 15 Nov 2025 09:09:51 +0000 Sat, 15 Nov 2025 09:07:12 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 15 Nov 2025 09:09:51 +0000 Sat, 15 Nov 2025 09:07:12 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 15 Nov 2025 09:09:51 +0000 Sat, 15 Nov 2025 09:07:18 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.78
Hostname: addons-663794
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
System Info:
Machine ID: 39d0412532a9467fac204c898cc459d3
System UUID: 39d04125-32a9-467f-ac20-4c898cc459d3
Boot ID: 5ab78582-c99d-444d-b6bf-1f7065465677
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods:  (14 in total)
  Namespace           Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------           ----                                      ------------  ----------  ---------------  -------------  ---
  default             busybox                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
  default             hello-world-app-5d498dc89-6vxps           0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
  default             nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
  ingress-nginx       ingress-nginx-controller-6c8bf45fb-pnxxs  100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m24s
  kube-system         amd-gpu-device-plugin-wqpn5               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
  kube-system         coredns-66bc5c9577-cm284                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m33s
  kube-system         etcd-addons-663794                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m38s
  kube-system         kube-apiserver-addons-663794              250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m40s
  kube-system         kube-controller-manager-addons-663794     200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m38s
  kube-system         kube-ingress-dns-minikube                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
  kube-system         kube-proxy-kjfgf                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
  kube-system         kube-scheduler-addons-663794              100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m38s
  kube-system         storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
  local-path-storage  local-path-provisioner-648f6765c9-t6qdh   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (42%)  0 (0%)
  memory             260Mi (6%)  170Mi (4%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                    From             Message
  ----    ------                   ----                   ----             -------
  Normal  Starting                 4m31s                  kube-proxy
  Normal  Starting                 4m45s                  kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  4m45s (x8 over 4m45s)  kubelet          Node addons-663794 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m45s (x8 over 4m45s)  kubelet          Node addons-663794 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m45s (x7 over 4m45s)  kubelet          Node addons-663794 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  4m45s                  kubelet          Updated Node Allocatable limit across pods
  Normal  Starting                 4m38s                  kubelet          Starting kubelet.
  Normal  NodeAllocatableEnforced  4m38s                  kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  4m38s                  kubelet          Node addons-663794 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m38s                  kubelet          Node addons-663794 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m38s                  kubelet          Node addons-663794 status is now: NodeHasSufficientPID
  Normal  NodeReady                4m37s                  kubelet          Node addons-663794 status is now: NodeReady
  Normal  RegisteredNode           4m34s                  node-controller  Node addons-663794 event: Registered Node addons-663794 in Controller
==> dmesg <==
[ +0.030977] kauditd_printk_skb: 293 callbacks suppressed
[ +3.715080] kauditd_printk_skb: 404 callbacks suppressed
[ +5.960734] kauditd_printk_skb: 5 callbacks suppressed
[ +9.521316] kauditd_printk_skb: 11 callbacks suppressed
[Nov15 09:08] kauditd_printk_skb: 26 callbacks suppressed
[ +7.292908] kauditd_printk_skb: 32 callbacks suppressed
[ +6.069759] kauditd_printk_skb: 5 callbacks suppressed
[ +3.192395] kauditd_printk_skb: 46 callbacks suppressed
[ +5.133658] kauditd_printk_skb: 116 callbacks suppressed
[ +0.959743] kauditd_printk_skb: 168 callbacks suppressed
[ +0.000034] kauditd_printk_skb: 98 callbacks suppressed
[ +5.417162] kauditd_printk_skb: 41 callbacks suppressed
[ +0.000078] kauditd_printk_skb: 23 callbacks suppressed
[ +5.322707] kauditd_printk_skb: 41 callbacks suppressed
[Nov15 09:09] kauditd_printk_skb: 2 callbacks suppressed
[ +5.902259] kauditd_printk_skb: 22 callbacks suppressed
[ +5.000021] kauditd_printk_skb: 38 callbacks suppressed
[ +2.361622] kauditd_printk_skb: 105 callbacks suppressed
[ +2.669380] kauditd_printk_skb: 174 callbacks suppressed
[ +0.687785] kauditd_printk_skb: 135 callbacks suppressed
[ +0.000032] kauditd_printk_skb: 88 callbacks suppressed
[ +7.592297] kauditd_printk_skb: 101 callbacks suppressed
[Nov15 09:10] kauditd_printk_skb: 10 callbacks suppressed
[ +6.846231] kauditd_printk_skb: 41 callbacks suppressed
[Nov15 09:11] kauditd_printk_skb: 127 callbacks suppressed
==> etcd [e1157ae27d3b1584080fc56c75eafad29ef6d119e5ce9175725fc78e1eabc92e] <==
{"level":"warn","ts":"2025-11-15T09:08:14.824246Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.575485ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-15T09:08:14.824265Z","caller":"traceutil/trace.go:172","msg":"trace[377924183] range","detail":"{range_begin:/registry/ingress; range_end:; response_count:0; response_revision:1020; }","duration":"217.597067ms","start":"2025-11-15T09:08:14.606663Z","end":"2025-11-15T09:08:14.824260Z","steps":["trace[377924183] 'agreement among raft nodes before linearized reading' (duration: 217.551674ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-15T09:08:14.824350Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.776699ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-15T09:08:14.824362Z","caller":"traceutil/trace.go:172","msg":"trace[767992002] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1020; }","duration":"118.789442ms","start":"2025-11-15T09:08:14.705569Z","end":"2025-11-15T09:08:14.824358Z","steps":["trace[767992002] 'agreement among raft nodes before linearized reading' (duration: 118.767726ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-15T09:08:14.824454Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"210.210582ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-15T09:08:14.824466Z","caller":"traceutil/trace.go:172","msg":"trace[1805512740] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1020; }","duration":"210.223166ms","start":"2025-11-15T09:08:14.614239Z","end":"2025-11-15T09:08:14.824462Z","steps":["trace[1805512740] 'agreement among raft nodes before linearized reading' (duration: 210.203452ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-15T09:08:16.557562Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T09:08:16.253153Z","time spent":"304.406864ms","remote":"127.0.0.1:39208","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
{"level":"info","ts":"2025-11-15T09:08:27.062862Z","caller":"traceutil/trace.go:172","msg":"trace[1709609591] linearizableReadLoop","detail":"{readStateIndex:1091; appliedIndex:1091; }","duration":"145.495768ms","start":"2025-11-15T09:08:26.917344Z","end":"2025-11-15T09:08:27.062840Z","steps":["trace[1709609591] 'read index received' (duration: 145.490502ms)","trace[1709609591] 'applied index is now lower than readState.Index' (duration: 4.429µs)"],"step_count":2}
{"level":"warn","ts":"2025-11-15T09:08:27.063091Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.675015ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
{"level":"info","ts":"2025-11-15T09:08:27.063119Z","caller":"traceutil/trace.go:172","msg":"trace[1077825839] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1059; }","duration":"145.771199ms","start":"2025-11-15T09:08:26.917341Z","end":"2025-11-15T09:08:27.063112Z","steps":["trace[1077825839] 'agreement among raft nodes before linearized reading' (duration: 145.597525ms)"],"step_count":1}
{"level":"info","ts":"2025-11-15T09:08:27.064586Z","caller":"traceutil/trace.go:172","msg":"trace[290919413] transaction","detail":"{read_only:false; response_revision:1060; number_of_response:1; }","duration":"205.8024ms","start":"2025-11-15T09:08:26.858771Z","end":"2025-11-15T09:08:27.064573Z","steps":["trace[290919413] 'process raft request' (duration: 205.163928ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-15T09:08:41.570742Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"251.708725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-15T09:08:41.570876Z","caller":"traceutil/trace.go:172","msg":"trace[1839882819] range","detail":"{range_begin:/registry/replicasets; range_end:; response_count:0; response_revision:1167; }","duration":"251.88631ms","start":"2025-11-15T09:08:41.318974Z","end":"2025-11-15T09:08:41.570860Z","steps":["trace[1839882819] 'range keys from in-memory index tree' (duration: 251.650651ms)"],"step_count":1}
{"level":"info","ts":"2025-11-15T09:09:15.666078Z","caller":"traceutil/trace.go:172","msg":"trace[509285057] transaction","detail":"{read_only:false; response_revision:1359; number_of_response:1; }","duration":"204.273294ms","start":"2025-11-15T09:09:15.461754Z","end":"2025-11-15T09:09:15.666028Z","steps":["trace[509285057] 'process raft request' (duration: 204.13976ms)"],"step_count":1}
{"level":"info","ts":"2025-11-15T09:09:16.930969Z","caller":"traceutil/trace.go:172","msg":"trace[94193585] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1384; }","duration":"180.967168ms","start":"2025-11-15T09:09:16.749991Z","end":"2025-11-15T09:09:16.930958Z","steps":["trace[94193585] 'process raft request' (duration: 180.823665ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-15T09:09:18.212152Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.603948ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-15T09:09:18.212230Z","caller":"traceutil/trace.go:172","msg":"trace[828949300] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1394; }","duration":"113.713642ms","start":"2025-11-15T09:09:18.098503Z","end":"2025-11-15T09:09:18.212216Z","steps":["trace[828949300] 'range keys from in-memory index tree' (duration: 113.50551ms)"],"step_count":1}
{"level":"info","ts":"2025-11-15T09:09:43.682190Z","caller":"traceutil/trace.go:172","msg":"trace[2142164999] transaction","detail":"{read_only:false; response_revision:1641; number_of_response:1; }","duration":"115.756208ms","start":"2025-11-15T09:09:43.566420Z","end":"2025-11-15T09:09:43.682176Z","steps":["trace[2142164999] 'process raft request' (duration: 115.603856ms)"],"step_count":1}
{"level":"info","ts":"2025-11-15T09:10:06.364616Z","caller":"traceutil/trace.go:172","msg":"trace[1037207073] linearizableReadLoop","detail":"{readStateIndex:1804; appliedIndex:1804; }","duration":"256.834129ms","start":"2025-11-15T09:10:06.107753Z","end":"2025-11-15T09:10:06.364588Z","steps":["trace[1037207073] 'read index received' (duration: 256.826377ms)","trace[1037207073] 'applied index is now lower than readState.Index' (duration: 6.938µs)"],"step_count":2}
{"level":"warn","ts":"2025-11-15T09:10:06.365462Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.874903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.78\" limit:1 ","response":"range_response_count:1 size:133"}
{"level":"info","ts":"2025-11-15T09:10:06.365522Z","caller":"traceutil/trace.go:172","msg":"trace[1661939143] range","detail":"{range_begin:/registry/masterleases/192.168.39.78; range_end:; response_count:1; response_revision:1743; }","duration":"104.956768ms","start":"2025-11-15T09:10:06.260555Z","end":"2025-11-15T09:10:06.365512Z","steps":["trace[1661939143] 'agreement among raft nodes before linearized reading' (duration: 104.770759ms)"],"step_count":1}
{"level":"info","ts":"2025-11-15T09:10:06.365589Z","caller":"traceutil/trace.go:172","msg":"trace[99288795] transaction","detail":"{read_only:false; response_revision:1743; number_of_response:1; }","duration":"379.762156ms","start":"2025-11-15T09:10:05.985805Z","end":"2025-11-15T09:10:06.365567Z","steps":["trace[99288795] 'process raft request' (duration: 378.802432ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-15T09:10:06.365739Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-15T09:10:05.985720Z","time spent":"379.903993ms","remote":"127.0.0.1:39466","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1741 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
{"level":"warn","ts":"2025-11-15T09:10:06.366697Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"258.948187ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-15T09:10:06.366721Z","caller":"traceutil/trace.go:172","msg":"trace[709321246] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1743; }","duration":"258.978183ms","start":"2025-11-15T09:10:06.107736Z","end":"2025-11-15T09:10:06.366714Z","steps":["trace[709321246] 'agreement among raft nodes before linearized reading' (duration: 256.973844ms)"],"step_count":1}
==> kernel <==
09:11:56 up 5 min, 0 users, load average: 0.40, 0.97, 0.50
Linux addons-663794 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Nov 1 20:49:51 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [41ad4b54254b4e6982c6ad4ec16f9aff3f18ba9bf06439e485e145355b489a9e] <==
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
E1115 09:08:04.338696 1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.70.92:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.70.92:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.70.92:443: connect: connection refused" logger="UnhandledError"
I1115 09:08:04.387422 1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1115 09:09:01.608795 1 conn.go:339] Error on socket receive: read tcp 192.168.39.78:8443->192.168.39.1:54710: use of closed network connection
E1115 09:09:01.798605 1 conn.go:339] Error on socket receive: read tcp 192.168.39.78:8443->192.168.39.1:54748: use of closed network connection
I1115 09:09:10.875528 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.193.91"}
I1115 09:09:30.933482 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1115 09:09:31.106451 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.94.245"}
I1115 09:09:51.387269 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1115 09:10:05.359146 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1115 09:10:08.567213 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1115 09:10:08.567463 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1115 09:10:08.603443 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1115 09:10:08.603639 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1115 09:10:08.613440 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1115 09:10:08.613485 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1115 09:10:08.628764 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1115 09:10:08.629461 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1115 09:10:08.736678 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1115 09:10:08.736785 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1115 09:10:09.614438 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1115 09:10:09.737802 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1115 09:10:09.857238 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I1115 09:11:54.568441 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.225.159"}
==> kube-controller-manager [95b051d48655923d833e585332c0dda91833637a2c6a209c97cca463ea3058ac] <==
E1115 09:10:19.407964 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1115 09:10:20.171437 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1115 09:10:20.172460 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
I1115 09:10:21.467475 1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
I1115 09:10:21.467521 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1115 09:10:21.518544 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1115 09:10:21.518650 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1115 09:10:25.696382 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1115 09:10:25.697309 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1115 09:10:27.783768 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1115 09:10:27.785118 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1115 09:10:28.773093 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1115 09:10:28.774125 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1115 09:10:48.104941 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1115 09:10:48.106128 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1115 09:10:48.973734 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1115 09:10:48.974997 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1115 09:10:49.178319 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1115 09:10:49.179248 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1115 09:11:17.044533 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1115 09:11:17.045725 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1115 09:11:23.241364 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1115 09:11:23.242507 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1115 09:11:39.269593 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1115 09:11:39.270643 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [2d2949bab9cc7b4145ecadbbb4001bd89d6b54bc740e9e60afcca89f217f5ff0] <==
I1115 09:07:23.829221 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1115 09:07:23.935880 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1115 09:07:23.935923 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.78"]
E1115 09:07:23.936020 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1115 09:07:24.126269 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1115 09:07:24.126359 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1115 09:07:24.126387 1 server_linux.go:132] "Using iptables Proxier"
I1115 09:07:24.268967 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1115 09:07:24.274748 1 server.go:527] "Version info" version="v1.34.1"
I1115 09:07:24.274950 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1115 09:07:24.387412 1 config.go:106] "Starting endpoint slice config controller"
I1115 09:07:24.388991 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1115 09:07:24.389532 1 config.go:403] "Starting serviceCIDR config controller"
I1115 09:07:24.391253 1 config.go:200] "Starting service config controller"
I1115 09:07:24.404299 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1115 09:07:24.398735 1 config.go:309] "Starting node config controller"
I1115 09:07:24.405931 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1115 09:07:24.406597 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1115 09:07:24.404167 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1115 09:07:24.496942 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1115 09:07:24.505985 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1115 09:07:24.508476 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
==> kube-scheduler [a2896ea62bda02545173bdabb5a4163e0ed09aa1e854b3f707d4254b13299a39] <==
E1115 09:07:14.400290 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1115 09:07:14.400426 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1115 09:07:14.400428 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1115 09:07:14.400546 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1115 09:07:14.400577 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1115 09:07:14.400729 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1115 09:07:14.400743 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1115 09:07:14.400816 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1115 09:07:14.400982 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1115 09:07:14.400979 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1115 09:07:15.283923 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1115 09:07:15.289442 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1115 09:07:15.299089 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1115 09:07:15.462481 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1115 09:07:15.474626 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1115 09:07:15.510284 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1115 09:07:15.532222 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1115 09:07:15.575368 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1115 09:07:15.593513 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1115 09:07:15.598324 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1115 09:07:15.607760 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1115 09:07:15.609995 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1115 09:07:15.632790 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1115 09:07:15.659298 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
I1115 09:07:17.791499 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Nov 15 09:10:19 addons-663794 kubelet[1494]: I1115 09:10:19.154169 1494 scope.go:117] "RemoveContainer" containerID="c6829847745f7047c1bc63fb8424a148760819355c00a3656c78baef2b3593d6"
Nov 15 09:10:19 addons-663794 kubelet[1494]: I1115 09:10:19.276740 1494 scope.go:117] "RemoveContainer" containerID="fa5c06d12276ddd7e1d0cb996d9d162fdd1dfeb3bd565989804a51f0a133b537"
Nov 15 09:10:19 addons-663794 kubelet[1494]: I1115 09:10:19.393332 1494 scope.go:117] "RemoveContainer" containerID="1ef13e73ed365e69801a6b7ca589b6fab8bcc0ea40e8500113254b888618fb06"
Nov 15 09:10:27 addons-663794 kubelet[1494]: E1115 09:10:27.562941 1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763197827562174336 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 15 09:10:27 addons-663794 kubelet[1494]: E1115 09:10:27.562986 1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763197827562174336 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 15 09:10:34 addons-663794 kubelet[1494]: I1115 09:10:34.334462 1494 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-wqpn5" secret="" err="secret \"gcp-auth\" not found"
Nov 15 09:10:37 addons-663794 kubelet[1494]: E1115 09:10:37.565333 1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763197837564885153 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 15 09:10:37 addons-663794 kubelet[1494]: E1115 09:10:37.565358 1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763197837564885153 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 15 09:10:47 addons-663794 kubelet[1494]: E1115 09:10:47.568331 1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763197847567728584 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 15 09:10:47 addons-663794 kubelet[1494]: E1115 09:10:47.568362 1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763197847567728584 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 15 09:10:57 addons-663794 kubelet[1494]: E1115 09:10:57.571420 1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763197857570926903 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 15 09:10:57 addons-663794 kubelet[1494]: E1115 09:10:57.571448 1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763197857570926903 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 15 09:11:07 addons-663794 kubelet[1494]: E1115 09:11:07.574911 1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763197867574315420 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 15 09:11:07 addons-663794 kubelet[1494]: E1115 09:11:07.574937 1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763197867574315420 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 15 09:11:17 addons-663794 kubelet[1494]: E1115 09:11:17.578434 1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763197877577715698 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 15 09:11:17 addons-663794 kubelet[1494]: E1115 09:11:17.578519 1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763197877577715698 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 15 09:11:27 addons-663794 kubelet[1494]: E1115 09:11:27.581370 1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763197887580840089 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 15 09:11:27 addons-663794 kubelet[1494]: E1115 09:11:27.581417 1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763197887580840089 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 15 09:11:29 addons-663794 kubelet[1494]: I1115 09:11:29.334026 1494 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Nov 15 09:11:37 addons-663794 kubelet[1494]: I1115 09:11:37.338453 1494 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-wqpn5" secret="" err="secret \"gcp-auth\" not found"
Nov 15 09:11:37 addons-663794 kubelet[1494]: E1115 09:11:37.585205 1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763197897584774738 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 15 09:11:37 addons-663794 kubelet[1494]: E1115 09:11:37.585230 1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763197897584774738 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 15 09:11:47 addons-663794 kubelet[1494]: E1115 09:11:47.589124 1494 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763197907588649386 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 15 09:11:47 addons-663794 kubelet[1494]: E1115 09:11:47.589473 1494 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763197907588649386 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588596} inodes_used:{value:201}}"
Nov 15 09:11:54 addons-663794 kubelet[1494]: I1115 09:11:54.615638 1494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n49fv\" (UniqueName: \"kubernetes.io/projected/05b53b7c-b0c3-4d2a-97a7-a8393de1fdca-kube-api-access-n49fv\") pod \"hello-world-app-5d498dc89-6vxps\" (UID: \"05b53b7c-b0c3-4d2a-97a7-a8393de1fdca\") " pod="default/hello-world-app-5d498dc89-6vxps"
==> storage-provisioner [546ccdaa0af307f25e29ceb63902aad47edca6e05cef3c5d50038afb813ca7e7] <==
W1115 09:11:30.820592 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:32.824528 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:32.834244 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:34.837499 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:34.843602 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:36.847618 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:36.853737 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:38.857679 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:38.862843 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:40.866031 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:40.878627 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:42.882421 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:42.889620 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:44.892816 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:44.899521 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:46.902852 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:46.908252 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:48.912678 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:48.919734 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:50.922635 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:50.926948 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:52.931358 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:52.937622 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:54.942870 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1115 09:11:54.952463 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-663794 -n addons-663794
helpers_test.go:269: (dbg) Run: kubectl --context addons-663794 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-6vxps ingress-nginx-admission-create-msxbx ingress-nginx-admission-patch-z6xbv
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context addons-663794 describe pod hello-world-app-5d498dc89-6vxps ingress-nginx-admission-create-msxbx ingress-nginx-admission-patch-z6xbv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-663794 describe pod hello-world-app-5d498dc89-6vxps ingress-nginx-admission-create-msxbx ingress-nginx-admission-patch-z6xbv: exit status 1 (74.625485ms)
-- stdout --
Name:             hello-world-app-5d498dc89-6vxps
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-663794/192.168.39.78
Start Time:       Sat, 15 Nov 2025 09:11:54 +0000
Labels:           app=hello-world-app
                  pod-template-hash=5d498dc89
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/hello-world-app-5d498dc89
Containers:
  hello-world-app:
    Container ID:
    Image:          docker.io/kicbase/echo-server:1.0
    Image ID:
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n49fv (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-n49fv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-6vxps to addons-663794
  Normal  Pulling    1s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-msxbx" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-z6xbv" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-663794 describe pod hello-world-app-5d498dc89-6vxps ingress-nginx-admission-create-msxbx ingress-nginx-admission-patch-z6xbv: exit status 1
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-663794 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-663794 addons disable ingress-dns --alsologtostderr -v=1: (1.710136363s)
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-663794 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-663794 addons disable ingress --alsologtostderr -v=1: (7.711676595s)
--- FAIL: TestAddons/parallel/Ingress (155.58s)